00:00:00.000 Started by upstream project "autotest-nightly" build number 4247 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3610 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.038 Fetching changes from the remote Git repository 00:00:00.039 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.072 Using shallow fetch with depth 1 00:00:00.072 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.072 > git --version # timeout=10 00:00:00.141 > git --version # 'git version 2.39.2' 00:00:00.141 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.217 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.217 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.283 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.298 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.315 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.315 > git config core.sparsecheckout # timeout=10 00:00:02.327 > git read-tree -mu HEAD # timeout=10 00:00:02.345 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:02.366 Commit message: 
"jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:02.367 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:02.477 [Pipeline] Start of Pipeline 00:00:02.490 [Pipeline] library 00:00:02.492 Loading library shm_lib@master 00:00:02.492 Library shm_lib@master is cached. Copying from home. 00:00:02.508 [Pipeline] node 00:00:17.511 Still waiting to schedule task 00:00:17.511 Waiting for next available executor on ‘vagrant-vm-host’ 00:20:04.914 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest 00:20:04.927 [Pipeline] { 00:20:04.939 [Pipeline] catchError 00:20:04.940 [Pipeline] { 00:20:04.958 [Pipeline] wrap 00:20:04.968 [Pipeline] { 00:20:04.978 [Pipeline] stage 00:20:04.980 [Pipeline] { (Prologue) 00:20:05.001 [Pipeline] echo 00:20:05.003 Node: VM-host-SM38 00:20:05.010 [Pipeline] cleanWs 00:20:05.022 [WS-CLEANUP] Deleting project workspace... 00:20:05.022 [WS-CLEANUP] Deferred wipeout is used... 00:20:05.028 [WS-CLEANUP] done 00:20:05.251 [Pipeline] setCustomBuildProperty 00:20:05.342 [Pipeline] httpRequest 00:20:05.739 [Pipeline] echo 00:20:05.741 Sorcerer 10.211.164.101 is alive 00:20:05.751 [Pipeline] retry 00:20:05.754 [Pipeline] { 00:20:05.768 [Pipeline] httpRequest 00:20:05.773 HttpMethod: GET 00:20:05.773 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:20:05.774 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:20:05.775 Response Code: HTTP/1.1 200 OK 00:20:05.775 Success: Status code 200 is in the accepted range: 200,404 00:20:05.775 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:20:05.920 [Pipeline] } 00:20:05.939 [Pipeline] // retry 00:20:05.948 [Pipeline] sh 00:20:06.232 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:20:06.246 [Pipeline] httpRequest 00:20:06.640 [Pipeline] echo 
00:20:06.642 Sorcerer 10.211.164.101 is alive 00:20:06.654 [Pipeline] retry 00:20:06.656 [Pipeline] { 00:20:06.671 [Pipeline] httpRequest 00:20:06.676 HttpMethod: GET 00:20:06.677 URL: http://10.211.164.101/packages/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:20:06.677 Sending request to url: http://10.211.164.101/packages/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:20:06.679 Response Code: HTTP/1.1 200 OK 00:20:06.679 Success: Status code 200 is in the accepted range: 200,404 00:20:06.680 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:20:09.072 [Pipeline] } 00:20:09.090 [Pipeline] // retry 00:20:09.098 [Pipeline] sh 00:20:09.378 + tar --no-same-owner -xf spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:20:12.700 [Pipeline] sh 00:20:12.978 + git -C spdk log --oneline -n5 00:20:12.979 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:20:12.979 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:20:12.979 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:20:12.979 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:20:12.979 427304da7 lib/reduce: Reset req->reduce_errno 00:20:13.001 [Pipeline] writeFile 00:20:13.027 [Pipeline] sh 00:20:13.304 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:20:13.315 [Pipeline] sh 00:20:13.591 + cat autorun-spdk.conf 00:20:13.591 SPDK_RUN_FUNCTIONAL_TEST=1 00:20:13.591 SPDK_RUN_ASAN=1 00:20:13.591 SPDK_RUN_UBSAN=1 00:20:13.591 SPDK_TEST_RAID=1 00:20:13.591 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:13.596 RUN_NIGHTLY=1 00:20:13.598 [Pipeline] } 00:20:13.612 [Pipeline] // stage 00:20:13.628 [Pipeline] stage 00:20:13.631 [Pipeline] { (Run VM) 00:20:13.643 [Pipeline] sh 00:20:13.921 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:20:13.921 + echo 'Start stage prepare_nvme.sh' 00:20:13.921 Start stage prepare_nvme.sh 00:20:13.921 + [[ -n 0 ]] 00:20:13.921 + disk_prefix=ex0 
00:20:13.921 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:20:13.921 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:20:13.921 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:20:13.921 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:20:13.921 ++ SPDK_RUN_ASAN=1 00:20:13.921 ++ SPDK_RUN_UBSAN=1 00:20:13.921 ++ SPDK_TEST_RAID=1 00:20:13.921 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:13.921 ++ RUN_NIGHTLY=1 00:20:13.921 + cd /var/jenkins/workspace/raid-vg-autotest 00:20:13.921 + nvme_files=() 00:20:13.921 + declare -A nvme_files 00:20:13.921 + backend_dir=/var/lib/libvirt/images/backends 00:20:13.921 + nvme_files['nvme.img']=5G 00:20:13.921 + nvme_files['nvme-cmb.img']=5G 00:20:13.921 + nvme_files['nvme-multi0.img']=4G 00:20:13.921 + nvme_files['nvme-multi1.img']=4G 00:20:13.921 + nvme_files['nvme-multi2.img']=4G 00:20:13.921 + nvme_files['nvme-openstack.img']=8G 00:20:13.921 + nvme_files['nvme-zns.img']=5G 00:20:13.921 + (( SPDK_TEST_NVME_PMR == 1 )) 00:20:13.921 + (( SPDK_TEST_FTL == 1 )) 00:20:13.921 + (( SPDK_TEST_NVME_FDP == 1 )) 00:20:13.921 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:20:13.921 + for nvme in "${!nvme_files[@]}" 00:20:13.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:20:13.921 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:20:13.921 + for nvme in "${!nvme_files[@]}" 00:20:13.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:20:13.921 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:20:13.921 + for nvme in "${!nvme_files[@]}" 00:20:13.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:20:13.921 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:20:13.921 + for nvme in "${!nvme_files[@]}" 00:20:13.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:20:13.921 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:20:13.921 + for nvme in "${!nvme_files[@]}" 00:20:13.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:20:13.921 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:20:13.921 + for nvme in "${!nvme_files[@]}" 00:20:13.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:20:13.921 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:20:13.921 + for nvme in "${!nvme_files[@]}" 00:20:13.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:20:14.855 
Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:20:14.855 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:20:14.855 + echo 'End stage prepare_nvme.sh' 00:20:14.855 End stage prepare_nvme.sh 00:20:14.866 [Pipeline] sh 00:20:15.144 + DISTRO=fedora39 00:20:15.144 + CPUS=10 00:20:15.144 + RAM=12288 00:20:15.144 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:20:15.144 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:20:15.144 00:20:15.144 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:20:15.144 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:20:15.144 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:20:15.144 HELP=0 00:20:15.144 DRY_RUN=0 00:20:15.144 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:20:15.144 NVME_DISKS_TYPE=nvme,nvme, 00:20:15.144 NVME_AUTO_CREATE=0 00:20:15.144 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:20:15.144 NVME_CMB=,, 00:20:15.144 NVME_PMR=,, 00:20:15.144 NVME_ZNS=,, 00:20:15.144 NVME_MS=,, 00:20:15.144 NVME_FDP=,, 00:20:15.144 SPDK_VAGRANT_DISTRO=fedora39 00:20:15.144 SPDK_VAGRANT_VMCPU=10 00:20:15.144 SPDK_VAGRANT_VMRAM=12288 00:20:15.144 SPDK_VAGRANT_PROVIDER=libvirt 00:20:15.144 SPDK_VAGRANT_HTTP_PROXY= 00:20:15.144 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:20:15.144 SPDK_OPENSTACK_NETWORK=0 00:20:15.144 VAGRANT_PACKAGE_BOX=0 00:20:15.144 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 
00:20:15.144 FORCE_DISTRO=true 00:20:15.144 VAGRANT_BOX_VERSION= 00:20:15.144 EXTRA_VAGRANTFILES= 00:20:15.144 NIC_MODEL=e1000 00:20:15.144 00:20:15.144 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:20:15.144 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:20:17.672 Bringing machine 'default' up with 'libvirt' provider... 00:20:18.238 ==> default: Creating image (snapshot of base box volume). 00:20:18.238 ==> default: Creating domain with the following settings... 00:20:18.238 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730821790_ccee773c668805402440 00:20:18.238 ==> default: -- Domain type: kvm 00:20:18.238 ==> default: -- Cpus: 10 00:20:18.238 ==> default: -- Feature: acpi 00:20:18.238 ==> default: -- Feature: apic 00:20:18.238 ==> default: -- Feature: pae 00:20:18.238 ==> default: -- Memory: 12288M 00:20:18.238 ==> default: -- Memory Backing: hugepages: 00:20:18.238 ==> default: -- Management MAC: 00:20:18.238 ==> default: -- Loader: 00:20:18.238 ==> default: -- Nvram: 00:20:18.238 ==> default: -- Base box: spdk/fedora39 00:20:18.238 ==> default: -- Storage pool: default 00:20:18.238 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730821790_ccee773c668805402440.img (20G) 00:20:18.238 ==> default: -- Volume Cache: default 00:20:18.238 ==> default: -- Kernel: 00:20:18.238 ==> default: -- Initrd: 00:20:18.238 ==> default: -- Graphics Type: vnc 00:20:18.238 ==> default: -- Graphics Port: -1 00:20:18.238 ==> default: -- Graphics IP: 127.0.0.1 00:20:18.238 ==> default: -- Graphics Password: Not defined 00:20:18.238 ==> default: -- Video Type: cirrus 00:20:18.238 ==> default: -- Video VRAM: 9216 00:20:18.238 ==> default: -- Sound Type: 00:20:18.238 ==> default: -- Keymap: en-us 00:20:18.238 ==> default: -- TPM Path: 00:20:18.238 ==> default: -- INPUT: type=mouse, bus=ps2 00:20:18.238 ==> default: -- Command line args: 
00:20:18.238 ==> default: -> value=-device, 00:20:18.238 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:20:18.238 ==> default: -> value=-drive, 00:20:18.238 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:20:18.238 ==> default: -> value=-device, 00:20:18.238 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:18.238 ==> default: -> value=-device, 00:20:18.238 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:20:18.238 ==> default: -> value=-drive, 00:20:18.238 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:20:18.238 ==> default: -> value=-device, 00:20:18.238 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:18.238 ==> default: -> value=-drive, 00:20:18.238 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:20:18.238 ==> default: -> value=-device, 00:20:18.238 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:18.238 ==> default: -> value=-drive, 00:20:18.238 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:20:18.238 ==> default: -> value=-device, 00:20:18.238 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:18.238 ==> default: Creating shared folders metadata... 00:20:18.238 ==> default: Starting domain. 00:20:19.619 ==> default: Waiting for domain to get an IP address... 00:20:34.484 ==> default: Waiting for SSH to become available... 00:20:34.484 ==> default: Configuring and enabling network interfaces... 
00:20:37.011 default: SSH address: 192.168.121.182:22 00:20:37.011 default: SSH username: vagrant 00:20:37.011 default: SSH auth method: private key 00:20:38.911 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:20:45.508 ==> default: Mounting SSHFS shared folder... 00:20:46.887 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:20:46.887 ==> default: Checking Mount.. 00:20:47.819 ==> default: Folder Successfully Mounted! 00:20:47.819 00:20:47.819 SUCCESS! 00:20:47.819 00:20:47.819 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:20:47.819 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:20:47.819 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:20:47.819 00:20:47.826 [Pipeline] } 00:20:47.840 [Pipeline] // stage 00:20:47.847 [Pipeline] dir 00:20:47.848 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:20:47.849 [Pipeline] { 00:20:47.861 [Pipeline] catchError 00:20:47.862 [Pipeline] { 00:20:47.874 [Pipeline] sh 00:20:48.206 + vagrant ssh-config --host vagrant 00:20:48.206 + sed -ne '/^Host/,$p' 00:20:48.206 + tee ssh_conf 00:20:50.742 Host vagrant 00:20:50.742 HostName 192.168.121.182 00:20:50.742 User vagrant 00:20:50.742 Port 22 00:20:50.742 UserKnownHostsFile /dev/null 00:20:50.742 StrictHostKeyChecking no 00:20:50.742 PasswordAuthentication no 00:20:50.742 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:20:50.742 IdentitiesOnly yes 00:20:50.742 LogLevel FATAL 00:20:50.742 ForwardAgent yes 00:20:50.742 ForwardX11 yes 00:20:50.742 00:20:50.780 [Pipeline] withEnv 00:20:50.781 [Pipeline] { 00:20:50.791 [Pipeline] sh 00:20:51.059 + /usr/local/bin/ssh -t 
-F ssh_conf vagrant@vagrant '#!/bin/bash 00:20:51.059 source /etc/os-release 00:20:51.059 [[ -e /image.version ]] && img=$(< /image.version) 00:20:51.059 # Minimal, systemd-like check. 00:20:51.059 if [[ -e /.dockerenv ]]; then 00:20:51.059 # Clear garbage from the node'\''s name: 00:20:51.059 # agt-er_autotest_547-896 -> autotest_547-896 00:20:51.059 # $HOSTNAME is the actual container id 00:20:51.059 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:20:51.059 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:20:51.059 # We can assume this is a mount from a host where container is running, 00:20:51.059 # so fetch its hostname to easily identify the target swarm worker. 00:20:51.059 container="$(< /etc/hostname) ($agent)" 00:20:51.059 else 00:20:51.059 # Fallback 00:20:51.059 container=$agent 00:20:51.059 fi 00:20:51.059 fi 00:20:51.059 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:20:51.059 ' 00:20:51.068 [Pipeline] } 00:20:51.089 [Pipeline] // withEnv 00:20:51.097 [Pipeline] setCustomBuildProperty 00:20:51.112 [Pipeline] stage 00:20:51.116 [Pipeline] { (Tests) 00:20:51.136 [Pipeline] sh 00:20:51.411 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:20:51.681 [Pipeline] sh 00:20:51.957 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:20:51.970 [Pipeline] timeout 00:20:51.971 Timeout set to expire in 1 hr 30 min 00:20:51.973 [Pipeline] { 00:20:51.988 [Pipeline] sh 00:20:52.264 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:20:52.829 HEAD is now at f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:20:52.842 [Pipeline] sh 00:20:53.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:20:53.132 [Pipeline] sh 00:20:53.409 + scp -F ssh_conf -r 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:20:53.425 [Pipeline] sh 00:20:53.701 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo' 00:20:53.701 ++ readlink -f spdk_repo 00:20:53.701 + DIR_ROOT=/home/vagrant/spdk_repo 00:20:53.701 + [[ -n /home/vagrant/spdk_repo ]] 00:20:53.701 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:20:53.701 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:20:53.701 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:20:53.701 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:20:53.701 + [[ -d /home/vagrant/spdk_repo/output ]] 00:20:53.701 + [[ raid-vg-autotest == pkgdep-* ]] 00:20:53.701 + cd /home/vagrant/spdk_repo 00:20:53.701 + source /etc/os-release 00:20:53.701 ++ NAME='Fedora Linux' 00:20:53.701 ++ VERSION='39 (Cloud Edition)' 00:20:53.701 ++ ID=fedora 00:20:53.701 ++ VERSION_ID=39 00:20:53.701 ++ VERSION_CODENAME= 00:20:53.701 ++ PLATFORM_ID=platform:f39 00:20:53.701 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:20:53.701 ++ ANSI_COLOR='0;38;2;60;110;180' 00:20:53.701 ++ LOGO=fedora-logo-icon 00:20:53.701 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:20:53.701 ++ HOME_URL=https://fedoraproject.org/ 00:20:53.701 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:20:53.701 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:20:53.701 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:20:53.701 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:20:53.701 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:20:53.701 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:20:53.701 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:20:53.701 ++ SUPPORT_END=2024-11-12 00:20:53.701 ++ VARIANT='Cloud Edition' 00:20:53.701 ++ VARIANT_ID=cloud 00:20:53.701 + uname -a 00:20:53.701 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:20:53.701 + sudo 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:54.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.267 Hugepages 00:20:54.267 node hugesize free / total 00:20:54.267 node0 1048576kB 0 / 0 00:20:54.267 node0 2048kB 0 / 0 00:20:54.267 00:20:54.267 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:54.267 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:20:54.267 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:20:54.267 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:20:54.267 + rm -f /tmp/spdk-ld-path 00:20:54.267 + source autorun-spdk.conf 00:20:54.267 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:20:54.267 ++ SPDK_RUN_ASAN=1 00:20:54.267 ++ SPDK_RUN_UBSAN=1 00:20:54.267 ++ SPDK_TEST_RAID=1 00:20:54.267 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:54.267 ++ RUN_NIGHTLY=1 00:20:54.267 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:20:54.267 + [[ -n '' ]] 00:20:54.267 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:20:54.267 + for M in /var/spdk/build-*-manifest.txt 00:20:54.267 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:20:54.267 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:54.267 + for M in /var/spdk/build-*-manifest.txt 00:20:54.267 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:20:54.267 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:54.267 + for M in /var/spdk/build-*-manifest.txt 00:20:54.267 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:20:54.267 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:54.267 ++ uname 00:20:54.267 + [[ Linux == \L\i\n\u\x ]] 00:20:54.267 + sudo dmesg -T 00:20:54.267 + sudo dmesg --clear 00:20:54.267 + dmesg_pid=5001 00:20:54.267 + sudo dmesg -Tw 00:20:54.267 + [[ Fedora Linux == FreeBSD ]] 00:20:54.267 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:20:54.267 
+ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:20:54.267 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:20:54.267 + [[ -x /usr/src/fio-static/fio ]] 00:20:54.267 + export FIO_BIN=/usr/src/fio-static/fio 00:20:54.267 + FIO_BIN=/usr/src/fio-static/fio 00:20:54.267 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:20:54.267 + [[ ! -v VFIO_QEMU_BIN ]] 00:20:54.267 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:20:54.267 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:54.267 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:54.267 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:20:54.267 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:54.267 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:54.267 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:20:54.267 15:50:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:20:54.267 15:50:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:20:54.267 15:50:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:20:54.267 15:50:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:20:54.267 15:50:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:20:54.267 15:50:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:20:54.267 15:50:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:54.267 15:50:26 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:20:54.267 15:50:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:20:54.267 15:50:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:20:54.527 15:50:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:20:54.527 15:50:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.527 15:50:26 -- scripts/common.sh@15 -- $ 
shopt -s extglob 00:20:54.527 15:50:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:54.527 15:50:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.527 15:50:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.527 15:50:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.527 15:50:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.527 15:50:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.527 15:50:26 -- paths/export.sh@5 -- $ export PATH 00:20:54.527 15:50:26 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.527 15:50:26 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:54.527 15:50:26 -- common/autobuild_common.sh@486 -- $ date +%s 00:20:54.527 15:50:26 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730821826.XXXXXX 00:20:54.527 15:50:26 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730821826.4SckGm 00:20:54.527 15:50:26 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:20:54.527 15:50:26 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:20:54.527 15:50:26 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:54.527 15:50:26 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:54.527 15:50:26 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:54.527 15:50:26 -- common/autobuild_common.sh@502 -- $ get_config_params 00:20:54.527 15:50:26 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:20:54.527 15:50:26 -- common/autotest_common.sh@10 -- $ set +x 00:20:54.527 15:50:26 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:20:54.527 15:50:26 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:20:54.527 15:50:26 -- pm/common@17 -- $ local monitor 00:20:54.527 15:50:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:54.527 15:50:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:54.527 15:50:26 -- pm/common@25 -- $ sleep 1 00:20:54.527 15:50:26 -- pm/common@21 -- $ date +%s 00:20:54.527 15:50:26 -- pm/common@21 -- $ date +%s 00:20:54.527 15:50:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730821826 00:20:54.528 15:50:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730821826 00:20:54.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730821826_collect-cpu-load.pm.log 00:20:54.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730821826_collect-vmstat.pm.log 00:20:55.461 15:50:27 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:20:55.461 15:50:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:20:55.461 15:50:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:20:55.461 15:50:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:55.461 15:50:27 -- spdk/autobuild.sh@16 -- $ date -u 00:20:55.461 Tue Nov 5 03:50:27 PM UTC 2024 00:20:55.461 15:50:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:20:55.461 v25.01-pre-158-gf220d590c 00:20:55.461 15:50:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:20:55.461 15:50:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:20:55.461 15:50:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:20:55.461 15:50:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:20:55.461 15:50:27 -- common/autotest_common.sh@10 -- $ set +x 
00:20:55.461 ************************************ 00:20:55.461 START TEST asan 00:20:55.461 ************************************ 00:20:55.461 using asan 00:20:55.461 15:50:27 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:20:55.461 00:20:55.461 real 0m0.000s 00:20:55.461 user 0m0.000s 00:20:55.461 sys 0m0.000s 00:20:55.461 15:50:27 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:20:55.461 ************************************ 00:20:55.461 END TEST asan 00:20:55.461 ************************************ 00:20:55.461 15:50:27 asan -- common/autotest_common.sh@10 -- $ set +x 00:20:55.461 15:50:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:20:55.461 15:50:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:20:55.461 15:50:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:20:55.461 15:50:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:20:55.461 15:50:27 -- common/autotest_common.sh@10 -- $ set +x 00:20:55.461 ************************************ 00:20:55.461 START TEST ubsan 00:20:55.461 ************************************ 00:20:55.461 using ubsan 00:20:55.461 15:50:27 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:20:55.461 00:20:55.461 real 0m0.000s 00:20:55.461 user 0m0.000s 00:20:55.461 sys 0m0.000s 00:20:55.461 15:50:27 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:20:55.462 15:50:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:20:55.462 ************************************ 00:20:55.462 END TEST ubsan 00:20:55.462 ************************************ 00:20:55.462 15:50:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:20:55.462 15:50:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:20:55.462 15:50:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:20:55.462 15:50:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:20:55.462 15:50:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:20:55.462 15:50:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:20:55.462 15:50:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:20:55.462 15:50:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:20:55.462 15:50:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:20:55.730 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:20:55.730 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:20:55.989 Using 'verbs' RDMA provider 00:21:06.905 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:21:16.867 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:21:16.867 Creating mk/config.mk...done. 00:21:16.867 Creating mk/cc.flags.mk...done. 00:21:16.867 Type 'make' to build. 00:21:16.867 15:50:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:21:16.867 15:50:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:21:16.867 15:50:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:21:16.867 15:50:48 -- common/autotest_common.sh@10 -- $ set +x 00:21:16.867 ************************************ 00:21:16.867 START TEST make 00:21:16.867 ************************************ 00:21:16.867 15:50:48 make -- common/autotest_common.sh@1127 -- $ make -j10 00:21:16.867 make[1]: Nothing to be done for 'all'. 
00:21:26.866 The Meson build system 00:21:26.866 Version: 1.5.0 00:21:26.866 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:21:26.866 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:21:26.866 Build type: native build 00:21:26.866 Program cat found: YES (/usr/bin/cat) 00:21:26.866 Project name: DPDK 00:21:26.866 Project version: 24.03.0 00:21:26.866 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:21:26.866 C linker for the host machine: cc ld.bfd 2.40-14 00:21:26.866 Host machine cpu family: x86_64 00:21:26.866 Host machine cpu: x86_64 00:21:26.866 Message: ## Building in Developer Mode ## 00:21:26.866 Program pkg-config found: YES (/usr/bin/pkg-config) 00:21:26.866 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:21:26.866 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:21:26.866 Program python3 found: YES (/usr/bin/python3) 00:21:26.866 Program cat found: YES (/usr/bin/cat) 00:21:26.866 Compiler for C supports arguments -march=native: YES 00:21:26.866 Checking for size of "void *" : 8 00:21:26.866 Checking for size of "void *" : 8 (cached) 00:21:26.866 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:21:26.866 Library m found: YES 00:21:26.866 Library numa found: YES 00:21:26.866 Has header "numaif.h" : YES 00:21:26.866 Library fdt found: NO 00:21:26.866 Library execinfo found: NO 00:21:26.866 Has header "execinfo.h" : YES 00:21:26.866 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:21:26.866 Run-time dependency libarchive found: NO (tried pkgconfig) 00:21:26.866 Run-time dependency libbsd found: NO (tried pkgconfig) 00:21:26.866 Run-time dependency jansson found: NO (tried pkgconfig) 00:21:26.866 Run-time dependency openssl found: YES 3.1.1 00:21:26.866 Run-time dependency libpcap found: YES 1.10.4 00:21:26.866 Has header "pcap.h" with dependency 
libpcap: YES 00:21:26.866 Compiler for C supports arguments -Wcast-qual: YES 00:21:26.866 Compiler for C supports arguments -Wdeprecated: YES 00:21:26.866 Compiler for C supports arguments -Wformat: YES 00:21:26.866 Compiler for C supports arguments -Wformat-nonliteral: NO 00:21:26.866 Compiler for C supports arguments -Wformat-security: NO 00:21:26.866 Compiler for C supports arguments -Wmissing-declarations: YES 00:21:26.866 Compiler for C supports arguments -Wmissing-prototypes: YES 00:21:26.866 Compiler for C supports arguments -Wnested-externs: YES 00:21:26.866 Compiler for C supports arguments -Wold-style-definition: YES 00:21:26.866 Compiler for C supports arguments -Wpointer-arith: YES 00:21:26.866 Compiler for C supports arguments -Wsign-compare: YES 00:21:26.866 Compiler for C supports arguments -Wstrict-prototypes: YES 00:21:26.866 Compiler for C supports arguments -Wundef: YES 00:21:26.866 Compiler for C supports arguments -Wwrite-strings: YES 00:21:26.866 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:21:26.866 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:21:26.867 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:21:26.867 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:21:26.867 Program objdump found: YES (/usr/bin/objdump) 00:21:26.867 Compiler for C supports arguments -mavx512f: YES 00:21:26.867 Checking if "AVX512 checking" compiles: YES 00:21:26.867 Fetching value of define "__SSE4_2__" : 1 00:21:26.867 Fetching value of define "__AES__" : 1 00:21:26.867 Fetching value of define "__AVX__" : 1 00:21:26.867 Fetching value of define "__AVX2__" : 1 00:21:26.867 Fetching value of define "__AVX512BW__" : 1 00:21:26.867 Fetching value of define "__AVX512CD__" : 1 00:21:26.867 Fetching value of define "__AVX512DQ__" : 1 00:21:26.867 Fetching value of define "__AVX512F__" : 1 00:21:26.867 Fetching value of define "__AVX512VL__" : 1 00:21:26.867 Fetching value of define 
"__PCLMUL__" : 1 00:21:26.867 Fetching value of define "__RDRND__" : 1 00:21:26.867 Fetching value of define "__RDSEED__" : 1 00:21:26.867 Fetching value of define "__VPCLMULQDQ__" : 1 00:21:26.867 Fetching value of define "__znver1__" : (undefined) 00:21:26.867 Fetching value of define "__znver2__" : (undefined) 00:21:26.867 Fetching value of define "__znver3__" : (undefined) 00:21:26.867 Fetching value of define "__znver4__" : (undefined) 00:21:26.867 Library asan found: YES 00:21:26.867 Compiler for C supports arguments -Wno-format-truncation: YES 00:21:26.867 Message: lib/log: Defining dependency "log" 00:21:26.867 Message: lib/kvargs: Defining dependency "kvargs" 00:21:26.867 Message: lib/telemetry: Defining dependency "telemetry" 00:21:26.867 Library rt found: YES 00:21:26.867 Checking for function "getentropy" : NO 00:21:26.867 Message: lib/eal: Defining dependency "eal" 00:21:26.867 Message: lib/ring: Defining dependency "ring" 00:21:26.867 Message: lib/rcu: Defining dependency "rcu" 00:21:26.867 Message: lib/mempool: Defining dependency "mempool" 00:21:26.867 Message: lib/mbuf: Defining dependency "mbuf" 00:21:26.867 Fetching value of define "__PCLMUL__" : 1 (cached) 00:21:26.867 Fetching value of define "__AVX512F__" : 1 (cached) 00:21:26.867 Fetching value of define "__AVX512BW__" : 1 (cached) 00:21:26.867 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:21:26.867 Fetching value of define "__AVX512VL__" : 1 (cached) 00:21:26.867 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:21:26.867 Compiler for C supports arguments -mpclmul: YES 00:21:26.867 Compiler for C supports arguments -maes: YES 00:21:26.867 Compiler for C supports arguments -mavx512f: YES (cached) 00:21:26.867 Compiler for C supports arguments -mavx512bw: YES 00:21:26.867 Compiler for C supports arguments -mavx512dq: YES 00:21:26.867 Compiler for C supports arguments -mavx512vl: YES 00:21:26.867 Compiler for C supports arguments -mvpclmulqdq: YES 00:21:26.867 Compiler for C 
supports arguments -mavx2: YES 00:21:26.867 Compiler for C supports arguments -mavx: YES 00:21:26.867 Message: lib/net: Defining dependency "net" 00:21:26.867 Message: lib/meter: Defining dependency "meter" 00:21:26.867 Message: lib/ethdev: Defining dependency "ethdev" 00:21:26.867 Message: lib/pci: Defining dependency "pci" 00:21:26.867 Message: lib/cmdline: Defining dependency "cmdline" 00:21:26.867 Message: lib/hash: Defining dependency "hash" 00:21:26.867 Message: lib/timer: Defining dependency "timer" 00:21:26.867 Message: lib/compressdev: Defining dependency "compressdev" 00:21:26.867 Message: lib/cryptodev: Defining dependency "cryptodev" 00:21:26.867 Message: lib/dmadev: Defining dependency "dmadev" 00:21:26.867 Compiler for C supports arguments -Wno-cast-qual: YES 00:21:26.867 Message: lib/power: Defining dependency "power" 00:21:26.867 Message: lib/reorder: Defining dependency "reorder" 00:21:26.867 Message: lib/security: Defining dependency "security" 00:21:26.867 Has header "linux/userfaultfd.h" : YES 00:21:26.867 Has header "linux/vduse.h" : YES 00:21:26.867 Message: lib/vhost: Defining dependency "vhost" 00:21:26.867 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:21:26.867 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:21:26.867 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:21:26.867 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:21:26.867 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:21:26.867 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:21:26.867 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:21:26.867 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:21:26.867 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:21:26.867 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:21:26.867 Program doxygen found: YES 
(/usr/local/bin/doxygen) 00:21:26.867 Configuring doxy-api-html.conf using configuration 00:21:26.867 Configuring doxy-api-man.conf using configuration 00:21:26.867 Program mandb found: YES (/usr/bin/mandb) 00:21:26.867 Program sphinx-build found: NO 00:21:26.867 Configuring rte_build_config.h using configuration 00:21:26.867 Message: 00:21:26.867 ================= 00:21:26.867 Applications Enabled 00:21:26.867 ================= 00:21:26.867 00:21:26.867 apps: 00:21:26.867 00:21:26.867 00:21:26.867 Message: 00:21:26.867 ================= 00:21:26.867 Libraries Enabled 00:21:26.867 ================= 00:21:26.867 00:21:26.867 libs: 00:21:26.867 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:21:26.867 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:21:26.867 cryptodev, dmadev, power, reorder, security, vhost, 00:21:26.867 00:21:26.867 Message: 00:21:26.867 =============== 00:21:26.867 Drivers Enabled 00:21:26.867 =============== 00:21:26.867 00:21:26.867 common: 00:21:26.867 00:21:26.867 bus: 00:21:26.867 pci, vdev, 00:21:26.867 mempool: 00:21:26.867 ring, 00:21:26.867 dma: 00:21:26.867 00:21:26.867 net: 00:21:26.867 00:21:26.867 crypto: 00:21:26.867 00:21:26.867 compress: 00:21:26.867 00:21:26.867 vdpa: 00:21:26.867 00:21:26.867 00:21:26.867 Message: 00:21:26.867 ================= 00:21:26.867 Content Skipped 00:21:26.867 ================= 00:21:26.867 00:21:26.867 apps: 00:21:26.867 dumpcap: explicitly disabled via build config 00:21:26.867 graph: explicitly disabled via build config 00:21:26.867 pdump: explicitly disabled via build config 00:21:26.867 proc-info: explicitly disabled via build config 00:21:26.867 test-acl: explicitly disabled via build config 00:21:26.867 test-bbdev: explicitly disabled via build config 00:21:26.867 test-cmdline: explicitly disabled via build config 00:21:26.867 test-compress-perf: explicitly disabled via build config 00:21:26.867 test-crypto-perf: explicitly disabled via build config 00:21:26.867 
test-dma-perf: explicitly disabled via build config 00:21:26.867 test-eventdev: explicitly disabled via build config 00:21:26.867 test-fib: explicitly disabled via build config 00:21:26.867 test-flow-perf: explicitly disabled via build config 00:21:26.867 test-gpudev: explicitly disabled via build config 00:21:26.867 test-mldev: explicitly disabled via build config 00:21:26.867 test-pipeline: explicitly disabled via build config 00:21:26.867 test-pmd: explicitly disabled via build config 00:21:26.867 test-regex: explicitly disabled via build config 00:21:26.867 test-sad: explicitly disabled via build config 00:21:26.867 test-security-perf: explicitly disabled via build config 00:21:26.867 00:21:26.867 libs: 00:21:26.867 argparse: explicitly disabled via build config 00:21:26.867 metrics: explicitly disabled via build config 00:21:26.867 acl: explicitly disabled via build config 00:21:26.867 bbdev: explicitly disabled via build config 00:21:26.867 bitratestats: explicitly disabled via build config 00:21:26.867 bpf: explicitly disabled via build config 00:21:26.867 cfgfile: explicitly disabled via build config 00:21:26.867 distributor: explicitly disabled via build config 00:21:26.867 efd: explicitly disabled via build config 00:21:26.867 eventdev: explicitly disabled via build config 00:21:26.867 dispatcher: explicitly disabled via build config 00:21:26.867 gpudev: explicitly disabled via build config 00:21:26.867 gro: explicitly disabled via build config 00:21:26.867 gso: explicitly disabled via build config 00:21:26.867 ip_frag: explicitly disabled via build config 00:21:26.867 jobstats: explicitly disabled via build config 00:21:26.867 latencystats: explicitly disabled via build config 00:21:26.867 lpm: explicitly disabled via build config 00:21:26.867 member: explicitly disabled via build config 00:21:26.867 pcapng: explicitly disabled via build config 00:21:26.867 rawdev: explicitly disabled via build config 00:21:26.867 regexdev: explicitly disabled via build 
config 00:21:26.867 mldev: explicitly disabled via build config 00:21:26.867 rib: explicitly disabled via build config 00:21:26.867 sched: explicitly disabled via build config 00:21:26.867 stack: explicitly disabled via build config 00:21:26.867 ipsec: explicitly disabled via build config 00:21:26.867 pdcp: explicitly disabled via build config 00:21:26.867 fib: explicitly disabled via build config 00:21:26.867 port: explicitly disabled via build config 00:21:26.867 pdump: explicitly disabled via build config 00:21:26.867 table: explicitly disabled via build config 00:21:26.867 pipeline: explicitly disabled via build config 00:21:26.867 graph: explicitly disabled via build config 00:21:26.867 node: explicitly disabled via build config 00:21:26.867 00:21:26.867 drivers: 00:21:26.867 common/cpt: not in enabled drivers build config 00:21:26.867 common/dpaax: not in enabled drivers build config 00:21:26.867 common/iavf: not in enabled drivers build config 00:21:26.867 common/idpf: not in enabled drivers build config 00:21:26.867 common/ionic: not in enabled drivers build config 00:21:26.867 common/mvep: not in enabled drivers build config 00:21:26.867 common/octeontx: not in enabled drivers build config 00:21:26.867 bus/auxiliary: not in enabled drivers build config 00:21:26.867 bus/cdx: not in enabled drivers build config 00:21:26.867 bus/dpaa: not in enabled drivers build config 00:21:26.867 bus/fslmc: not in enabled drivers build config 00:21:26.867 bus/ifpga: not in enabled drivers build config 00:21:26.867 bus/platform: not in enabled drivers build config 00:21:26.867 bus/uacce: not in enabled drivers build config 00:21:26.868 bus/vmbus: not in enabled drivers build config 00:21:26.868 common/cnxk: not in enabled drivers build config 00:21:26.868 common/mlx5: not in enabled drivers build config 00:21:26.868 common/nfp: not in enabled drivers build config 00:21:26.868 common/nitrox: not in enabled drivers build config 00:21:26.868 common/qat: not in enabled drivers 
build config 00:21:26.868 common/sfc_efx: not in enabled drivers build config 00:21:26.868 mempool/bucket: not in enabled drivers build config 00:21:26.868 mempool/cnxk: not in enabled drivers build config 00:21:26.868 mempool/dpaa: not in enabled drivers build config 00:21:26.868 mempool/dpaa2: not in enabled drivers build config 00:21:26.868 mempool/octeontx: not in enabled drivers build config 00:21:26.868 mempool/stack: not in enabled drivers build config 00:21:26.868 dma/cnxk: not in enabled drivers build config 00:21:26.868 dma/dpaa: not in enabled drivers build config 00:21:26.868 dma/dpaa2: not in enabled drivers build config 00:21:26.868 dma/hisilicon: not in enabled drivers build config 00:21:26.868 dma/idxd: not in enabled drivers build config 00:21:26.868 dma/ioat: not in enabled drivers build config 00:21:26.868 dma/skeleton: not in enabled drivers build config 00:21:26.868 net/af_packet: not in enabled drivers build config 00:21:26.868 net/af_xdp: not in enabled drivers build config 00:21:26.868 net/ark: not in enabled drivers build config 00:21:26.868 net/atlantic: not in enabled drivers build config 00:21:26.868 net/avp: not in enabled drivers build config 00:21:26.868 net/axgbe: not in enabled drivers build config 00:21:26.868 net/bnx2x: not in enabled drivers build config 00:21:26.868 net/bnxt: not in enabled drivers build config 00:21:26.868 net/bonding: not in enabled drivers build config 00:21:26.868 net/cnxk: not in enabled drivers build config 00:21:26.868 net/cpfl: not in enabled drivers build config 00:21:26.868 net/cxgbe: not in enabled drivers build config 00:21:26.868 net/dpaa: not in enabled drivers build config 00:21:26.868 net/dpaa2: not in enabled drivers build config 00:21:26.868 net/e1000: not in enabled drivers build config 00:21:26.868 net/ena: not in enabled drivers build config 00:21:26.868 net/enetc: not in enabled drivers build config 00:21:26.868 net/enetfec: not in enabled drivers build config 00:21:26.868 net/enic: not in 
enabled drivers build config 00:21:26.868 net/failsafe: not in enabled drivers build config 00:21:26.868 net/fm10k: not in enabled drivers build config 00:21:26.868 net/gve: not in enabled drivers build config 00:21:26.868 net/hinic: not in enabled drivers build config 00:21:26.868 net/hns3: not in enabled drivers build config 00:21:26.868 net/i40e: not in enabled drivers build config 00:21:26.868 net/iavf: not in enabled drivers build config 00:21:26.868 net/ice: not in enabled drivers build config 00:21:26.868 net/idpf: not in enabled drivers build config 00:21:26.868 net/igc: not in enabled drivers build config 00:21:26.868 net/ionic: not in enabled drivers build config 00:21:26.868 net/ipn3ke: not in enabled drivers build config 00:21:26.868 net/ixgbe: not in enabled drivers build config 00:21:26.868 net/mana: not in enabled drivers build config 00:21:26.868 net/memif: not in enabled drivers build config 00:21:26.868 net/mlx4: not in enabled drivers build config 00:21:26.868 net/mlx5: not in enabled drivers build config 00:21:26.868 net/mvneta: not in enabled drivers build config 00:21:26.868 net/mvpp2: not in enabled drivers build config 00:21:26.868 net/netvsc: not in enabled drivers build config 00:21:26.868 net/nfb: not in enabled drivers build config 00:21:26.868 net/nfp: not in enabled drivers build config 00:21:26.868 net/ngbe: not in enabled drivers build config 00:21:26.868 net/null: not in enabled drivers build config 00:21:26.868 net/octeontx: not in enabled drivers build config 00:21:26.868 net/octeon_ep: not in enabled drivers build config 00:21:26.868 net/pcap: not in enabled drivers build config 00:21:26.868 net/pfe: not in enabled drivers build config 00:21:26.868 net/qede: not in enabled drivers build config 00:21:26.868 net/ring: not in enabled drivers build config 00:21:26.868 net/sfc: not in enabled drivers build config 00:21:26.868 net/softnic: not in enabled drivers build config 00:21:26.868 net/tap: not in enabled drivers build config 
00:21:26.868 net/thunderx: not in enabled drivers build config 00:21:26.868 net/txgbe: not in enabled drivers build config 00:21:26.868 net/vdev_netvsc: not in enabled drivers build config 00:21:26.868 net/vhost: not in enabled drivers build config 00:21:26.868 net/virtio: not in enabled drivers build config 00:21:26.868 net/vmxnet3: not in enabled drivers build config 00:21:26.868 raw/*: missing internal dependency, "rawdev" 00:21:26.868 crypto/armv8: not in enabled drivers build config 00:21:26.868 crypto/bcmfs: not in enabled drivers build config 00:21:26.868 crypto/caam_jr: not in enabled drivers build config 00:21:26.868 crypto/ccp: not in enabled drivers build config 00:21:26.868 crypto/cnxk: not in enabled drivers build config 00:21:26.868 crypto/dpaa_sec: not in enabled drivers build config 00:21:26.868 crypto/dpaa2_sec: not in enabled drivers build config 00:21:26.868 crypto/ipsec_mb: not in enabled drivers build config 00:21:26.868 crypto/mlx5: not in enabled drivers build config 00:21:26.868 crypto/mvsam: not in enabled drivers build config 00:21:26.868 crypto/nitrox: not in enabled drivers build config 00:21:26.868 crypto/null: not in enabled drivers build config 00:21:26.868 crypto/octeontx: not in enabled drivers build config 00:21:26.868 crypto/openssl: not in enabled drivers build config 00:21:26.868 crypto/scheduler: not in enabled drivers build config 00:21:26.868 crypto/uadk: not in enabled drivers build config 00:21:26.868 crypto/virtio: not in enabled drivers build config 00:21:26.868 compress/isal: not in enabled drivers build config 00:21:26.868 compress/mlx5: not in enabled drivers build config 00:21:26.868 compress/nitrox: not in enabled drivers build config 00:21:26.868 compress/octeontx: not in enabled drivers build config 00:21:26.868 compress/zlib: not in enabled drivers build config 00:21:26.868 regex/*: missing internal dependency, "regexdev" 00:21:26.868 ml/*: missing internal dependency, "mldev" 00:21:26.868 vdpa/ifc: not in enabled 
drivers build config 00:21:26.868 vdpa/mlx5: not in enabled drivers build config 00:21:26.868 vdpa/nfp: not in enabled drivers build config 00:21:26.868 vdpa/sfc: not in enabled drivers build config 00:21:26.868 event/*: missing internal dependency, "eventdev" 00:21:26.868 baseband/*: missing internal dependency, "bbdev" 00:21:26.868 gpu/*: missing internal dependency, "gpudev" 00:21:26.868 00:21:26.868 00:21:26.868 Build targets in project: 84 00:21:26.868 00:21:26.868 DPDK 24.03.0 00:21:26.868 00:21:26.868 User defined options 00:21:26.868 buildtype : debug 00:21:26.868 default_library : shared 00:21:26.868 libdir : lib 00:21:26.868 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:21:26.868 b_sanitize : address 00:21:26.868 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:21:26.868 c_link_args : 00:21:26.868 cpu_instruction_set: native 00:21:26.868 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:21:26.868 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:21:26.868 enable_docs : false 00:21:26.868 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:21:26.868 enable_kmods : false 00:21:26.868 max_lcores : 128 00:21:26.868 tests : false 00:21:26.868 00:21:26.868 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:21:26.868 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:21:26.868 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:21:26.868 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:21:26.868 [3/267] Linking 
static target lib/librte_kvargs.a 00:21:26.868 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:21:26.868 [5/267] Linking static target lib/librte_log.a 00:21:26.868 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:21:26.868 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:21:26.868 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:21:26.868 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:21:26.868 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:21:26.868 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:21:26.868 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:21:26.868 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:21:26.868 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:21:26.868 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:21:26.868 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:21:26.868 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:21:26.868 [18/267] Linking static target lib/librte_telemetry.a 00:21:26.868 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:21:26.868 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:21:27.158 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:21:27.158 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:21:27.158 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:21:27.158 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:21:27.158 [25/267] Linking target lib/librte_log.so.24.1 
00:21:27.158 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:21:27.158 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:21:27.158 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:21:27.158 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:21:27.416 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:21:27.416 [31/267] Linking target lib/librte_kvargs.so.24.1 00:21:27.416 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:21:27.416 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:21:27.416 [34/267] Linking target lib/librte_telemetry.so.24.1 00:21:27.416 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:21:27.416 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:21:27.416 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:21:27.416 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:21:27.416 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:21:27.416 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:21:27.416 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:21:27.416 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:21:27.674 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:21:27.674 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:21:27.674 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:21:27.674 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:21:27.932 [47/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:21:27.932 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:21:27.932 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:21:27.932 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:21:27.932 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:21:27.932 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:21:28.190 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:21:28.190 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:21:28.190 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:21:28.190 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:21:28.190 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:21:28.190 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:21:28.190 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:21:28.456 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:21:28.456 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:21:28.456 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:21:28.456 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:21:28.456 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:21:28.456 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:21:28.456 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:21:28.456 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:21:28.719 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:21:28.719 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:21:28.719 [70/267] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:21:28.719 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:21:28.719 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:21:28.719 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:21:28.719 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:21:28.719 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:21:28.977 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:21:28.977 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:21:28.977 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:21:28.977 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:21:28.977 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:21:29.235 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:21:29.235 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:21:29.235 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:21:29.235 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:21:29.235 [85/267] Linking static target lib/librte_ring.a 00:21:29.235 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:21:29.235 [87/267] Linking static target lib/librte_eal.a 00:21:29.494 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:21:29.494 [89/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:21:29.494 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:21:29.494 [91/267] Linking static target lib/librte_rcu.a 00:21:29.494 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:21:29.494 [93/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to 
capture output) 00:21:29.494 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:21:29.494 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:21:29.752 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:21:29.752 [97/267] Linking static target lib/librte_mempool.a 00:21:29.752 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:21:30.010 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:21:30.010 [100/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.010 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:21:30.010 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:21:30.010 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:21:30.010 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:21:30.010 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:21:30.268 [106/267] Linking static target lib/librte_net.a 00:21:30.268 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:21:30.268 [108/267] Linking static target lib/librte_mbuf.a 00:21:30.268 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:21:30.268 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:21:30.268 [111/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:21:30.268 [112/267] Linking static target lib/librte_meter.a 00:21:30.268 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:21:30.527 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:21:30.527 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.527 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.527 [117/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:21:30.793 [118/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.793 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:21:30.793 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:21:31.059 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:21:31.059 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:21:31.059 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:21:31.317 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:21:31.317 [125/267] Linking static target lib/librte_pci.a 00:21:31.317 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:21:31.317 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:21:31.317 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:21:31.317 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:21:31.317 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:21:31.317 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:21:31.317 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:21:31.317 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:21:31.575 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:21:31.575 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:21:31.575 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:21:31.575 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:21:31.575 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:21:31.575 [139/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:21:31.575 [140/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:31.575 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:21:31.575 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:21:31.575 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:21:31.575 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:21:31.575 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:21:31.575 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:21:31.575 [147/267] Linking static target lib/librte_cmdline.a 00:21:31.833 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:21:31.833 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:21:31.833 [150/267] Linking static target lib/librte_timer.a 00:21:31.833 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:21:32.091 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:21:32.091 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:21:32.091 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:21:32.091 [155/267] Linking static target lib/librte_ethdev.a 00:21:32.348 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:21:32.348 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:21:32.348 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:21:32.348 [159/267] Linking static target lib/librte_compressdev.a 00:21:32.348 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:21:32.348 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture 
output) 00:21:32.348 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:21:32.606 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:21:32.606 [164/267] Linking static target lib/librte_hash.a 00:21:32.606 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:21:32.606 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:21:32.606 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:21:32.606 [168/267] Linking static target lib/librte_dmadev.a 00:21:32.606 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:21:32.864 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:21:32.864 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:21:32.864 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:21:32.864 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:21:32.864 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:33.121 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:21:33.121 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:21:33.121 [177/267] Linking static target lib/librte_cryptodev.a 00:21:33.121 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:21:33.121 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:21:33.386 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:21:33.386 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:21:33.386 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:21:33.386 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:33.386 
[184/267] Linking static target lib/librte_power.a 00:21:33.386 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:21:33.386 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:21:33.386 [187/267] Linking static target lib/librte_reorder.a 00:21:33.644 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:21:33.644 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:21:33.644 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:21:33.644 [191/267] Linking static target lib/librte_security.a 00:21:33.901 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:21:33.901 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:21:34.159 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:21:34.159 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:21:34.159 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:21:34.416 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:21:34.416 [198/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:21:34.416 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:21:34.674 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:21:34.674 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:21:34.674 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:21:34.674 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:21:34.674 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:21:34.674 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:21:34.932 [206/267] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:21:34.932 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:21:34.932 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:21:34.932 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:21:35.189 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:35.189 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:21:35.189 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:21:35.189 [213/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:21:35.189 [214/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:35.189 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:35.189 [216/267] Linking static target drivers/librte_bus_pci.a 00:21:35.189 [217/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:21:35.189 [218/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:35.189 [219/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:35.189 [220/267] Linking static target drivers/librte_bus_vdev.a 00:21:35.446 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:21:35.446 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:35.446 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:35.446 [224/267] Linking static target drivers/librte_mempool_ring.a 00:21:35.446 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:35.704 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:21:35.704 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:21:37.075 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:21:37.075 [229/267] Linking target lib/librte_eal.so.24.1 00:21:37.075 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:21:37.075 [231/267] Linking target lib/librte_pci.so.24.1 00:21:37.075 [232/267] Linking target lib/librte_timer.so.24.1 00:21:37.075 [233/267] Linking target lib/librte_meter.so.24.1 00:21:37.075 [234/267] Linking target lib/librte_ring.so.24.1 00:21:37.075 [235/267] Linking target lib/librte_dmadev.so.24.1 00:21:37.075 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:21:37.332 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:21:37.332 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:21:37.332 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:21:37.332 [240/267] Linking target drivers/librte_bus_pci.so.24.1 00:21:37.332 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:21:37.332 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:21:37.332 [243/267] Linking target lib/librte_rcu.so.24.1 00:21:37.332 [244/267] Linking target lib/librte_mempool.so.24.1 00:21:37.332 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:21:37.332 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:21:37.332 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:21:37.625 [248/267] Linking target lib/librte_mbuf.so.24.1 00:21:37.625 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:21:37.625 [250/267] Linking target lib/librte_compressdev.so.24.1 00:21:37.625 [251/267] Linking 
target lib/librte_net.so.24.1 00:21:37.625 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:21:37.625 [253/267] Linking target lib/librte_reorder.so.24.1 00:21:37.625 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:21:37.625 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:21:37.625 [256/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:37.625 [257/267] Linking target lib/librte_cmdline.so.24.1 00:21:37.625 [258/267] Linking target lib/librte_hash.so.24.1 00:21:37.625 [259/267] Linking target lib/librte_security.so.24.1 00:21:37.883 [260/267] Linking target lib/librte_ethdev.so.24.1 00:21:37.883 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:21:37.883 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:21:37.883 [263/267] Linking target lib/librte_power.so.24.1 00:21:38.816 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:21:38.816 [265/267] Linking static target lib/librte_vhost.a 00:21:39.748 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:21:40.006 [267/267] Linking target lib/librte_vhost.so.24.1 00:21:40.006 INFO: autodetecting backend as ninja 00:21:40.006 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:21:54.911 CC lib/ut/ut.o 00:21:54.911 CC lib/log/log.o 00:21:54.911 CC lib/log/log_deprecated.o 00:21:54.911 CC lib/log/log_flags.o 00:21:54.911 CC lib/ut_mock/mock.o 00:21:54.911 LIB libspdk_ut.a 00:21:54.911 LIB libspdk_ut_mock.a 00:21:54.911 SO libspdk_ut_mock.so.6.0 00:21:54.911 SO libspdk_ut.so.2.0 00:21:54.911 LIB libspdk_log.a 00:21:54.911 SYMLINK libspdk_ut.so 00:21:54.911 SYMLINK libspdk_ut_mock.so 00:21:54.911 SO libspdk_log.so.7.1 00:21:54.911 SYMLINK libspdk_log.so 
00:21:54.911 CC lib/ioat/ioat.o 00:21:54.911 CC lib/dma/dma.o 00:21:54.911 CXX lib/trace_parser/trace.o 00:21:54.911 CC lib/util/base64.o 00:21:54.911 CC lib/util/cpuset.o 00:21:54.911 CC lib/util/bit_array.o 00:21:54.911 CC lib/util/crc16.o 00:21:54.911 CC lib/util/crc32.o 00:21:54.911 CC lib/util/crc32c.o 00:21:54.911 CC lib/vfio_user/host/vfio_user_pci.o 00:21:54.911 CC lib/util/crc32_ieee.o 00:21:54.911 CC lib/util/crc64.o 00:21:54.911 CC lib/util/dif.o 00:21:54.911 CC lib/util/fd.o 00:21:54.911 LIB libspdk_dma.a 00:21:54.911 CC lib/util/fd_group.o 00:21:54.911 SO libspdk_dma.so.5.0 00:21:54.911 CC lib/util/file.o 00:21:54.911 CC lib/util/hexlify.o 00:21:54.911 CC lib/util/iov.o 00:21:54.911 CC lib/util/math.o 00:21:54.911 SYMLINK libspdk_dma.so 00:21:54.911 CC lib/util/net.o 00:21:54.911 LIB libspdk_ioat.a 00:21:54.911 SO libspdk_ioat.so.7.0 00:21:54.911 CC lib/util/pipe.o 00:21:54.911 CC lib/util/strerror_tls.o 00:21:54.911 CC lib/util/string.o 00:21:54.911 SYMLINK libspdk_ioat.so 00:21:54.911 CC lib/util/uuid.o 00:21:54.911 CC lib/vfio_user/host/vfio_user.o 00:21:54.911 CC lib/util/xor.o 00:21:54.911 CC lib/util/zipf.o 00:21:54.911 CC lib/util/md5.o 00:21:54.911 LIB libspdk_vfio_user.a 00:21:54.911 SO libspdk_vfio_user.so.5.0 00:21:54.911 SYMLINK libspdk_vfio_user.so 00:21:54.911 LIB libspdk_util.a 00:21:54.911 SO libspdk_util.so.10.1 00:21:54.911 LIB libspdk_trace_parser.a 00:21:54.911 SO libspdk_trace_parser.so.6.0 00:21:54.911 SYMLINK libspdk_util.so 00:21:54.911 SYMLINK libspdk_trace_parser.so 00:21:54.911 CC lib/rdma_provider/common.o 00:21:54.911 CC lib/rdma_provider/rdma_provider_verbs.o 00:21:54.911 CC lib/rdma_utils/rdma_utils.o 00:21:54.911 CC lib/idxd/idxd.o 00:21:54.911 CC lib/idxd/idxd_user.o 00:21:54.911 CC lib/idxd/idxd_kernel.o 00:21:54.911 CC lib/conf/conf.o 00:21:54.911 CC lib/env_dpdk/env.o 00:21:54.911 CC lib/json/json_parse.o 00:21:54.911 CC lib/vmd/vmd.o 00:21:54.911 CC lib/vmd/led.o 00:21:54.911 CC lib/env_dpdk/memory.o 00:21:54.911 
LIB libspdk_rdma_provider.a 00:21:54.911 CC lib/env_dpdk/pci.o 00:21:54.911 SO libspdk_rdma_provider.so.6.0 00:21:54.911 LIB libspdk_conf.a 00:21:54.911 CC lib/env_dpdk/init.o 00:21:54.911 CC lib/json/json_util.o 00:21:54.911 SO libspdk_conf.so.6.0 00:21:54.911 LIB libspdk_rdma_utils.a 00:21:54.911 SYMLINK libspdk_rdma_provider.so 00:21:54.911 CC lib/json/json_write.o 00:21:54.911 SO libspdk_rdma_utils.so.1.0 00:21:54.911 SYMLINK libspdk_conf.so 00:21:54.911 CC lib/env_dpdk/threads.o 00:21:54.911 SYMLINK libspdk_rdma_utils.so 00:21:54.911 CC lib/env_dpdk/pci_ioat.o 00:21:54.911 CC lib/env_dpdk/pci_virtio.o 00:21:54.911 CC lib/env_dpdk/pci_vmd.o 00:21:54.911 CC lib/env_dpdk/pci_idxd.o 00:21:54.911 CC lib/env_dpdk/pci_event.o 00:21:54.911 CC lib/env_dpdk/sigbus_handler.o 00:21:54.911 LIB libspdk_json.a 00:21:54.911 CC lib/env_dpdk/pci_dpdk.o 00:21:54.911 CC lib/env_dpdk/pci_dpdk_2207.o 00:21:54.911 CC lib/env_dpdk/pci_dpdk_2211.o 00:21:54.911 SO libspdk_json.so.6.0 00:21:54.911 SYMLINK libspdk_json.so 00:21:55.169 LIB libspdk_idxd.a 00:21:55.169 SO libspdk_idxd.so.12.1 00:21:55.169 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:21:55.169 SYMLINK libspdk_idxd.so 00:21:55.169 CC lib/jsonrpc/jsonrpc_client.o 00:21:55.169 CC lib/jsonrpc/jsonrpc_server.o 00:21:55.169 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:21:55.169 LIB libspdk_vmd.a 00:21:55.169 SO libspdk_vmd.so.6.0 00:21:55.169 SYMLINK libspdk_vmd.so 00:21:55.465 LIB libspdk_jsonrpc.a 00:21:55.465 SO libspdk_jsonrpc.so.6.0 00:21:55.465 SYMLINK libspdk_jsonrpc.so 00:21:55.723 CC lib/rpc/rpc.o 00:21:55.723 LIB libspdk_env_dpdk.a 00:21:55.980 SO libspdk_env_dpdk.so.15.1 00:21:55.980 LIB libspdk_rpc.a 00:21:55.980 SO libspdk_rpc.so.6.0 00:21:55.980 SYMLINK libspdk_env_dpdk.so 00:21:55.980 SYMLINK libspdk_rpc.so 00:21:56.239 CC lib/notify/notify_rpc.o 00:21:56.239 CC lib/notify/notify.o 00:21:56.239 CC lib/keyring/keyring.o 00:21:56.239 CC lib/keyring/keyring_rpc.o 00:21:56.239 CC lib/trace/trace.o 00:21:56.239 CC 
lib/trace/trace_flags.o 00:21:56.239 CC lib/trace/trace_rpc.o 00:21:56.239 LIB libspdk_notify.a 00:21:56.239 SO libspdk_notify.so.6.0 00:21:56.497 SYMLINK libspdk_notify.so 00:21:56.497 LIB libspdk_trace.a 00:21:56.497 LIB libspdk_keyring.a 00:21:56.497 SO libspdk_trace.so.11.0 00:21:56.497 SO libspdk_keyring.so.2.0 00:21:56.497 SYMLINK libspdk_trace.so 00:21:56.497 SYMLINK libspdk_keyring.so 00:21:56.754 CC lib/sock/sock.o 00:21:56.754 CC lib/sock/sock_rpc.o 00:21:56.754 CC lib/thread/thread.o 00:21:56.754 CC lib/thread/iobuf.o 00:21:57.012 LIB libspdk_sock.a 00:21:57.012 SO libspdk_sock.so.10.0 00:21:57.270 SYMLINK libspdk_sock.so 00:21:57.270 CC lib/nvme/nvme_ctrlr.o 00:21:57.270 CC lib/nvme/nvme_ctrlr_cmd.o 00:21:57.270 CC lib/nvme/nvme_pcie_common.o 00:21:57.270 CC lib/nvme/nvme_ns_cmd.o 00:21:57.270 CC lib/nvme/nvme_ns.o 00:21:57.270 CC lib/nvme/nvme_fabric.o 00:21:57.270 CC lib/nvme/nvme_pcie.o 00:21:57.270 CC lib/nvme/nvme_qpair.o 00:21:57.270 CC lib/nvme/nvme.o 00:21:57.834 CC lib/nvme/nvme_quirks.o 00:21:58.092 LIB libspdk_thread.a 00:21:58.092 SO libspdk_thread.so.11.0 00:21:58.092 CC lib/nvme/nvme_transport.o 00:21:58.092 CC lib/nvme/nvme_discovery.o 00:21:58.092 SYMLINK libspdk_thread.so 00:21:58.092 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:21:58.092 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:21:58.092 CC lib/nvme/nvme_tcp.o 00:21:58.092 CC lib/nvme/nvme_opal.o 00:21:58.092 CC lib/nvme/nvme_io_msg.o 00:21:58.351 CC lib/nvme/nvme_poll_group.o 00:21:58.351 CC lib/nvme/nvme_zns.o 00:21:58.608 CC lib/nvme/nvme_stubs.o 00:21:58.608 CC lib/nvme/nvme_auth.o 00:21:58.608 CC lib/nvme/nvme_cuse.o 00:21:58.608 CC lib/nvme/nvme_rdma.o 00:21:58.876 CC lib/accel/accel.o 00:21:58.876 CC lib/blob/blobstore.o 00:21:58.876 CC lib/init/json_config.o 00:21:59.135 CC lib/init/subsystem.o 00:21:59.135 CC lib/virtio/virtio.o 00:21:59.135 CC lib/init/subsystem_rpc.o 00:21:59.135 CC lib/init/rpc.o 00:21:59.394 CC lib/virtio/virtio_vhost_user.o 00:21:59.394 CC lib/fsdev/fsdev.o 
00:21:59.394 CC lib/virtio/virtio_vfio_user.o 00:21:59.394 CC lib/virtio/virtio_pci.o 00:21:59.394 LIB libspdk_init.a 00:21:59.394 SO libspdk_init.so.6.0 00:21:59.394 SYMLINK libspdk_init.so 00:21:59.394 CC lib/blob/request.o 00:21:59.655 CC lib/blob/zeroes.o 00:21:59.655 CC lib/blob/blob_bs_dev.o 00:21:59.655 LIB libspdk_virtio.a 00:21:59.655 CC lib/fsdev/fsdev_io.o 00:21:59.655 CC lib/fsdev/fsdev_rpc.o 00:21:59.655 SO libspdk_virtio.so.7.0 00:21:59.655 CC lib/accel/accel_rpc.o 00:21:59.655 SYMLINK libspdk_virtio.so 00:21:59.655 CC lib/accel/accel_sw.o 00:21:59.655 CC lib/event/app.o 00:21:59.655 CC lib/event/reactor.o 00:21:59.655 CC lib/event/log_rpc.o 00:21:59.917 CC lib/event/app_rpc.o 00:21:59.917 CC lib/event/scheduler_static.o 00:21:59.917 LIB libspdk_accel.a 00:21:59.917 LIB libspdk_fsdev.a 00:21:59.917 LIB libspdk_nvme.a 00:21:59.917 SO libspdk_accel.so.16.0 00:21:59.917 SO libspdk_fsdev.so.2.0 00:22:00.178 SYMLINK libspdk_fsdev.so 00:22:00.178 SYMLINK libspdk_accel.so 00:22:00.178 SO libspdk_nvme.so.15.0 00:22:00.178 LIB libspdk_event.a 00:22:00.178 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:22:00.178 SO libspdk_event.so.14.0 00:22:00.178 CC lib/bdev/bdev.o 00:22:00.178 CC lib/bdev/bdev_zone.o 00:22:00.178 CC lib/bdev/bdev_rpc.o 00:22:00.178 CC lib/bdev/part.o 00:22:00.178 CC lib/bdev/scsi_nvme.o 00:22:00.435 SYMLINK libspdk_nvme.so 00:22:00.435 SYMLINK libspdk_event.so 00:22:01.001 LIB libspdk_fuse_dispatcher.a 00:22:01.001 SO libspdk_fuse_dispatcher.so.1.0 00:22:01.001 SYMLINK libspdk_fuse_dispatcher.so 00:22:02.373 LIB libspdk_blob.a 00:22:02.373 SO libspdk_blob.so.11.0 00:22:02.373 SYMLINK libspdk_blob.so 00:22:02.632 CC lib/lvol/lvol.o 00:22:02.632 CC lib/blobfs/blobfs.o 00:22:02.632 CC lib/blobfs/tree.o 00:22:03.198 LIB libspdk_bdev.a 00:22:03.198 SO libspdk_bdev.so.17.0 00:22:03.198 SYMLINK libspdk_bdev.so 00:22:03.456 CC lib/ftl/ftl_core.o 00:22:03.456 CC lib/ftl/ftl_init.o 00:22:03.456 CC lib/ftl/ftl_layout.o 00:22:03.456 CC lib/ftl/ftl_debug.o 
00:22:03.456 CC lib/scsi/dev.o 00:22:03.456 CC lib/ublk/ublk.o 00:22:03.456 CC lib/nbd/nbd.o 00:22:03.456 CC lib/nvmf/ctrlr.o 00:22:03.456 LIB libspdk_blobfs.a 00:22:03.456 SO libspdk_blobfs.so.10.0 00:22:03.456 SYMLINK libspdk_blobfs.so 00:22:03.456 CC lib/nvmf/ctrlr_discovery.o 00:22:03.456 LIB libspdk_lvol.a 00:22:03.733 CC lib/ublk/ublk_rpc.o 00:22:03.733 CC lib/scsi/lun.o 00:22:03.733 SO libspdk_lvol.so.10.0 00:22:03.733 SYMLINK libspdk_lvol.so 00:22:03.733 CC lib/ftl/ftl_io.o 00:22:03.733 CC lib/nvmf/ctrlr_bdev.o 00:22:03.733 CC lib/ftl/ftl_sb.o 00:22:03.733 CC lib/nbd/nbd_rpc.o 00:22:03.733 CC lib/scsi/port.o 00:22:04.012 CC lib/ftl/ftl_l2p.o 00:22:04.012 CC lib/nvmf/subsystem.o 00:22:04.012 LIB libspdk_nbd.a 00:22:04.012 CC lib/nvmf/nvmf.o 00:22:04.012 CC lib/scsi/scsi.o 00:22:04.012 SO libspdk_nbd.so.7.0 00:22:04.012 CC lib/ftl/ftl_l2p_flat.o 00:22:04.012 SYMLINK libspdk_nbd.so 00:22:04.012 CC lib/ftl/ftl_nv_cache.o 00:22:04.012 CC lib/ftl/ftl_band.o 00:22:04.012 CC lib/ftl/ftl_band_ops.o 00:22:04.012 CC lib/scsi/scsi_bdev.o 00:22:04.012 LIB libspdk_ublk.a 00:22:04.270 SO libspdk_ublk.so.3.0 00:22:04.270 CC lib/ftl/ftl_writer.o 00:22:04.270 SYMLINK libspdk_ublk.so 00:22:04.270 CC lib/nvmf/nvmf_rpc.o 00:22:04.270 CC lib/ftl/ftl_rq.o 00:22:04.532 CC lib/scsi/scsi_pr.o 00:22:04.532 CC lib/scsi/scsi_rpc.o 00:22:04.532 CC lib/nvmf/transport.o 00:22:04.532 CC lib/scsi/task.o 00:22:04.532 CC lib/ftl/ftl_reloc.o 00:22:04.532 CC lib/nvmf/tcp.o 00:22:04.790 CC lib/nvmf/stubs.o 00:22:04.790 LIB libspdk_scsi.a 00:22:04.790 SO libspdk_scsi.so.9.0 00:22:04.790 SYMLINK libspdk_scsi.so 00:22:04.790 CC lib/nvmf/mdns_server.o 00:22:05.064 CC lib/iscsi/conn.o 00:22:05.064 CC lib/vhost/vhost.o 00:22:05.064 CC lib/vhost/vhost_rpc.o 00:22:05.064 CC lib/vhost/vhost_scsi.o 00:22:05.064 CC lib/vhost/vhost_blk.o 00:22:05.064 CC lib/vhost/rte_vhost_user.o 00:22:05.323 CC lib/ftl/ftl_l2p_cache.o 00:22:05.323 CC lib/nvmf/rdma.o 00:22:05.323 CC lib/nvmf/auth.o 00:22:05.580 CC 
lib/iscsi/init_grp.o 00:22:05.580 CC lib/ftl/ftl_p2l.o 00:22:05.580 CC lib/iscsi/iscsi.o 00:22:05.580 CC lib/iscsi/param.o 00:22:05.838 CC lib/iscsi/portal_grp.o 00:22:05.838 CC lib/ftl/ftl_p2l_log.o 00:22:05.838 CC lib/iscsi/tgt_node.o 00:22:05.838 CC lib/iscsi/iscsi_subsystem.o 00:22:06.096 CC lib/iscsi/iscsi_rpc.o 00:22:06.096 CC lib/iscsi/task.o 00:22:06.096 LIB libspdk_vhost.a 00:22:06.096 CC lib/ftl/mngt/ftl_mngt.o 00:22:06.096 SO libspdk_vhost.so.8.0 00:22:06.096 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:22:06.096 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:22:06.353 SYMLINK libspdk_vhost.so 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_startup.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_md.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_misc.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_band.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:22:06.353 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:22:06.610 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:22:06.610 CC lib/ftl/utils/ftl_conf.o 00:22:06.610 CC lib/ftl/utils/ftl_md.o 00:22:06.610 CC lib/ftl/utils/ftl_mempool.o 00:22:06.610 CC lib/ftl/utils/ftl_bitmap.o 00:22:06.610 CC lib/ftl/utils/ftl_property.o 00:22:06.610 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:22:06.867 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:22:06.867 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:22:06.867 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:22:06.867 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:22:06.867 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:22:06.867 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:22:06.867 CC lib/ftl/upgrade/ftl_sb_v3.o 00:22:06.867 CC lib/ftl/upgrade/ftl_sb_v5.o 00:22:06.867 CC lib/ftl/nvc/ftl_nvc_dev.o 00:22:06.867 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:22:07.124 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:22:07.124 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:22:07.124 CC lib/ftl/base/ftl_base_dev.o 00:22:07.124 CC lib/ftl/base/ftl_base_bdev.o 
00:22:07.124 CC lib/ftl/ftl_trace.o 00:22:07.124 LIB libspdk_iscsi.a 00:22:07.124 SO libspdk_iscsi.so.8.0 00:22:07.382 LIB libspdk_ftl.a 00:22:07.382 SYMLINK libspdk_iscsi.so 00:22:07.382 LIB libspdk_nvmf.a 00:22:07.382 SO libspdk_nvmf.so.20.0 00:22:07.382 SO libspdk_ftl.so.9.0 00:22:07.639 SYMLINK libspdk_nvmf.so 00:22:07.639 SYMLINK libspdk_ftl.so 00:22:07.897 CC module/env_dpdk/env_dpdk_rpc.o 00:22:08.154 CC module/keyring/linux/keyring.o 00:22:08.154 CC module/keyring/file/keyring.o 00:22:08.154 CC module/accel/dsa/accel_dsa.o 00:22:08.154 CC module/scheduler/dynamic/scheduler_dynamic.o 00:22:08.154 CC module/accel/ioat/accel_ioat.o 00:22:08.154 CC module/accel/error/accel_error.o 00:22:08.154 CC module/blob/bdev/blob_bdev.o 00:22:08.154 CC module/sock/posix/posix.o 00:22:08.154 CC module/fsdev/aio/fsdev_aio.o 00:22:08.154 LIB libspdk_env_dpdk_rpc.a 00:22:08.154 SO libspdk_env_dpdk_rpc.so.6.0 00:22:08.154 CC module/keyring/linux/keyring_rpc.o 00:22:08.154 CC module/keyring/file/keyring_rpc.o 00:22:08.154 SYMLINK libspdk_env_dpdk_rpc.so 00:22:08.154 CC module/fsdev/aio/fsdev_aio_rpc.o 00:22:08.154 LIB libspdk_scheduler_dynamic.a 00:22:08.154 LIB libspdk_keyring_linux.a 00:22:08.154 CC module/accel/ioat/accel_ioat_rpc.o 00:22:08.154 SO libspdk_scheduler_dynamic.so.4.0 00:22:08.154 SO libspdk_keyring_linux.so.1.0 00:22:08.427 LIB libspdk_keyring_file.a 00:22:08.427 SYMLINK libspdk_keyring_linux.so 00:22:08.427 SYMLINK libspdk_scheduler_dynamic.so 00:22:08.427 CC module/fsdev/aio/linux_aio_mgr.o 00:22:08.427 SO libspdk_keyring_file.so.2.0 00:22:08.427 CC module/accel/dsa/accel_dsa_rpc.o 00:22:08.427 LIB libspdk_blob_bdev.a 00:22:08.427 CC module/accel/error/accel_error_rpc.o 00:22:08.427 LIB libspdk_accel_ioat.a 00:22:08.427 SO libspdk_blob_bdev.so.11.0 00:22:08.427 SYMLINK libspdk_keyring_file.so 00:22:08.427 SO libspdk_accel_ioat.so.6.0 00:22:08.427 SYMLINK libspdk_blob_bdev.so 00:22:08.427 LIB libspdk_accel_dsa.a 00:22:08.427 SYMLINK libspdk_accel_ioat.so 
00:22:08.427 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:22:08.427 SO libspdk_accel_dsa.so.5.0 00:22:08.427 LIB libspdk_accel_error.a 00:22:08.427 SYMLINK libspdk_accel_dsa.so 00:22:08.427 SO libspdk_accel_error.so.2.0 00:22:08.427 CC module/accel/iaa/accel_iaa.o 00:22:08.427 CC module/scheduler/gscheduler/gscheduler.o 00:22:08.427 SYMLINK libspdk_accel_error.so 00:22:08.427 CC module/accel/iaa/accel_iaa_rpc.o 00:22:08.685 LIB libspdk_scheduler_dpdk_governor.a 00:22:08.685 SO libspdk_scheduler_dpdk_governor.so.4.0 00:22:08.685 CC module/bdev/delay/vbdev_delay.o 00:22:08.685 CC module/bdev/gpt/gpt.o 00:22:08.685 CC module/bdev/error/vbdev_error.o 00:22:08.685 CC module/blobfs/bdev/blobfs_bdev.o 00:22:08.685 SYMLINK libspdk_scheduler_dpdk_governor.so 00:22:08.685 CC module/bdev/gpt/vbdev_gpt.o 00:22:08.685 LIB libspdk_scheduler_gscheduler.a 00:22:08.685 SO libspdk_scheduler_gscheduler.so.4.0 00:22:08.685 LIB libspdk_accel_iaa.a 00:22:08.685 SO libspdk_accel_iaa.so.3.0 00:22:08.685 SYMLINK libspdk_scheduler_gscheduler.so 00:22:08.685 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:22:08.685 CC module/bdev/error/vbdev_error_rpc.o 00:22:08.685 SYMLINK libspdk_accel_iaa.so 00:22:08.685 LIB libspdk_sock_posix.a 00:22:08.942 SO libspdk_sock_posix.so.6.0 00:22:08.942 CC module/bdev/lvol/vbdev_lvol.o 00:22:08.942 CC module/bdev/malloc/bdev_malloc.o 00:22:08.942 LIB libspdk_fsdev_aio.a 00:22:08.942 CC module/bdev/delay/vbdev_delay_rpc.o 00:22:08.942 SYMLINK libspdk_sock_posix.so 00:22:08.942 CC module/bdev/null/bdev_null.o 00:22:08.942 LIB libspdk_bdev_gpt.a 00:22:08.942 LIB libspdk_blobfs_bdev.a 00:22:08.942 SO libspdk_fsdev_aio.so.1.0 00:22:08.942 LIB libspdk_bdev_error.a 00:22:08.942 SO libspdk_blobfs_bdev.so.6.0 00:22:08.942 SO libspdk_bdev_gpt.so.6.0 00:22:08.942 SO libspdk_bdev_error.so.6.0 00:22:08.942 CC module/bdev/nvme/bdev_nvme.o 00:22:08.942 SYMLINK libspdk_fsdev_aio.so 00:22:08.942 CC module/bdev/nvme/bdev_nvme_rpc.o 00:22:08.942 SYMLINK libspdk_bdev_gpt.so 
00:22:08.942 SYMLINK libspdk_bdev_error.so 00:22:08.942 SYMLINK libspdk_blobfs_bdev.so 00:22:08.942 CC module/bdev/nvme/nvme_rpc.o 00:22:08.942 CC module/bdev/nvme/bdev_mdns_client.o 00:22:08.942 CC module/bdev/null/bdev_null_rpc.o 00:22:09.199 LIB libspdk_bdev_delay.a 00:22:09.199 CC module/bdev/passthru/vbdev_passthru.o 00:22:09.199 SO libspdk_bdev_delay.so.6.0 00:22:09.199 SYMLINK libspdk_bdev_delay.so 00:22:09.199 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:22:09.199 LIB libspdk_bdev_null.a 00:22:09.199 SO libspdk_bdev_null.so.6.0 00:22:09.199 SYMLINK libspdk_bdev_null.so 00:22:09.199 CC module/bdev/malloc/bdev_malloc_rpc.o 00:22:09.199 CC module/bdev/raid/bdev_raid.o 00:22:09.199 CC module/bdev/split/vbdev_split.o 00:22:09.457 CC module/bdev/zone_block/vbdev_zone_block.o 00:22:09.457 LIB libspdk_bdev_passthru.a 00:22:09.457 CC module/bdev/aio/bdev_aio.o 00:22:09.457 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:22:09.457 SO libspdk_bdev_passthru.so.6.0 00:22:09.457 LIB libspdk_bdev_malloc.a 00:22:09.457 SO libspdk_bdev_malloc.so.6.0 00:22:09.457 SYMLINK libspdk_bdev_passthru.so 00:22:09.457 CC module/bdev/split/vbdev_split_rpc.o 00:22:09.457 CC module/bdev/aio/bdev_aio_rpc.o 00:22:09.457 CC module/bdev/ftl/bdev_ftl.o 00:22:09.457 SYMLINK libspdk_bdev_malloc.so 00:22:09.457 CC module/bdev/nvme/vbdev_opal.o 00:22:09.715 LIB libspdk_bdev_split.a 00:22:09.715 CC module/bdev/nvme/vbdev_opal_rpc.o 00:22:09.715 SO libspdk_bdev_split.so.6.0 00:22:09.715 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:22:09.715 LIB libspdk_bdev_lvol.a 00:22:09.715 SYMLINK libspdk_bdev_split.so 00:22:09.715 CC module/bdev/raid/bdev_raid_rpc.o 00:22:09.715 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:22:09.715 SO libspdk_bdev_lvol.so.6.0 00:22:09.715 LIB libspdk_bdev_aio.a 00:22:09.715 SO libspdk_bdev_aio.so.6.0 00:22:09.715 CC module/bdev/ftl/bdev_ftl_rpc.o 00:22:09.715 CC module/bdev/raid/bdev_raid_sb.o 00:22:09.715 SYMLINK libspdk_bdev_lvol.so 00:22:09.715 CC module/bdev/raid/raid0.o 
00:22:09.715 SYMLINK libspdk_bdev_aio.so 00:22:09.715 CC module/bdev/raid/raid1.o 00:22:10.065 LIB libspdk_bdev_zone_block.a 00:22:10.065 SO libspdk_bdev_zone_block.so.6.0 00:22:10.065 CC module/bdev/raid/concat.o 00:22:10.065 SYMLINK libspdk_bdev_zone_block.so 00:22:10.065 CC module/bdev/raid/raid5f.o 00:22:10.065 CC module/bdev/iscsi/bdev_iscsi.o 00:22:10.065 CC module/bdev/virtio/bdev_virtio_scsi.o 00:22:10.065 LIB libspdk_bdev_ftl.a 00:22:10.065 SO libspdk_bdev_ftl.so.6.0 00:22:10.065 CC module/bdev/virtio/bdev_virtio_blk.o 00:22:10.065 SYMLINK libspdk_bdev_ftl.so 00:22:10.065 CC module/bdev/virtio/bdev_virtio_rpc.o 00:22:10.065 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:22:10.323 LIB libspdk_bdev_iscsi.a 00:22:10.323 SO libspdk_bdev_iscsi.so.6.0 00:22:10.323 SYMLINK libspdk_bdev_iscsi.so 00:22:10.580 LIB libspdk_bdev_raid.a 00:22:10.580 SO libspdk_bdev_raid.so.6.0 00:22:10.580 LIB libspdk_bdev_virtio.a 00:22:10.580 SYMLINK libspdk_bdev_raid.so 00:22:10.580 SO libspdk_bdev_virtio.so.6.0 00:22:10.581 SYMLINK libspdk_bdev_virtio.so 00:22:11.512 LIB libspdk_bdev_nvme.a 00:22:11.512 SO libspdk_bdev_nvme.so.7.1 00:22:11.771 SYMLINK libspdk_bdev_nvme.so 00:22:12.029 CC module/event/subsystems/scheduler/scheduler.o 00:22:12.029 CC module/event/subsystems/fsdev/fsdev.o 00:22:12.029 CC module/event/subsystems/sock/sock.o 00:22:12.029 CC module/event/subsystems/vmd/vmd_rpc.o 00:22:12.029 CC module/event/subsystems/vmd/vmd.o 00:22:12.029 CC module/event/subsystems/iobuf/iobuf.o 00:22:12.029 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:22:12.029 CC module/event/subsystems/keyring/keyring.o 00:22:12.029 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:22:12.286 LIB libspdk_event_scheduler.a 00:22:12.286 LIB libspdk_event_keyring.a 00:22:12.286 LIB libspdk_event_fsdev.a 00:22:12.286 LIB libspdk_event_sock.a 00:22:12.286 LIB libspdk_event_vmd.a 00:22:12.286 SO libspdk_event_scheduler.so.4.0 00:22:12.286 LIB libspdk_event_vhost_blk.a 00:22:12.286 LIB libspdk_event_iobuf.a 
00:22:12.286 SO libspdk_event_keyring.so.1.0 00:22:12.286 SO libspdk_event_sock.so.5.0 00:22:12.286 SO libspdk_event_fsdev.so.1.0 00:22:12.286 SO libspdk_event_vmd.so.6.0 00:22:12.286 SO libspdk_event_vhost_blk.so.3.0 00:22:12.286 SO libspdk_event_iobuf.so.3.0 00:22:12.286 SYMLINK libspdk_event_scheduler.so 00:22:12.286 SYMLINK libspdk_event_keyring.so 00:22:12.286 SYMLINK libspdk_event_sock.so 00:22:12.286 SYMLINK libspdk_event_fsdev.so 00:22:12.286 SYMLINK libspdk_event_vmd.so 00:22:12.286 SYMLINK libspdk_event_vhost_blk.so 00:22:12.286 SYMLINK libspdk_event_iobuf.so 00:22:12.543 CC module/event/subsystems/accel/accel.o 00:22:12.543 LIB libspdk_event_accel.a 00:22:12.543 SO libspdk_event_accel.so.6.0 00:22:12.801 SYMLINK libspdk_event_accel.so 00:22:13.061 CC module/event/subsystems/bdev/bdev.o 00:22:13.061 LIB libspdk_event_bdev.a 00:22:13.061 SO libspdk_event_bdev.so.6.0 00:22:13.061 SYMLINK libspdk_event_bdev.so 00:22:13.360 CC module/event/subsystems/nbd/nbd.o 00:22:13.360 CC module/event/subsystems/scsi/scsi.o 00:22:13.360 CC module/event/subsystems/ublk/ublk.o 00:22:13.360 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:22:13.360 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:22:13.360 LIB libspdk_event_nbd.a 00:22:13.620 LIB libspdk_event_scsi.a 00:22:13.620 SO libspdk_event_nbd.so.6.0 00:22:13.620 SO libspdk_event_scsi.so.6.0 00:22:13.620 LIB libspdk_event_ublk.a 00:22:13.620 SYMLINK libspdk_event_nbd.so 00:22:13.620 SO libspdk_event_ublk.so.3.0 00:22:13.620 LIB libspdk_event_nvmf.a 00:22:13.620 SYMLINK libspdk_event_scsi.so 00:22:13.620 SO libspdk_event_nvmf.so.6.0 00:22:13.620 SYMLINK libspdk_event_ublk.so 00:22:13.620 SYMLINK libspdk_event_nvmf.so 00:22:13.620 CC module/event/subsystems/iscsi/iscsi.o 00:22:13.620 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:22:13.879 LIB libspdk_event_vhost_scsi.a 00:22:13.879 LIB libspdk_event_iscsi.a 00:22:13.879 SO libspdk_event_vhost_scsi.so.3.0 00:22:13.879 SO libspdk_event_iscsi.so.6.0 00:22:13.879 SYMLINK 
libspdk_event_vhost_scsi.so 00:22:13.879 SYMLINK libspdk_event_iscsi.so 00:22:14.137 SO libspdk.so.6.0 00:22:14.137 SYMLINK libspdk.so 00:22:14.395 TEST_HEADER include/spdk/accel.h 00:22:14.395 CC app/trace_record/trace_record.o 00:22:14.395 CXX app/trace/trace.o 00:22:14.395 TEST_HEADER include/spdk/accel_module.h 00:22:14.395 TEST_HEADER include/spdk/assert.h 00:22:14.395 TEST_HEADER include/spdk/barrier.h 00:22:14.395 TEST_HEADER include/spdk/base64.h 00:22:14.395 TEST_HEADER include/spdk/bdev.h 00:22:14.395 TEST_HEADER include/spdk/bdev_module.h 00:22:14.395 TEST_HEADER include/spdk/bdev_zone.h 00:22:14.395 TEST_HEADER include/spdk/bit_array.h 00:22:14.395 CC app/nvmf_tgt/nvmf_main.o 00:22:14.395 TEST_HEADER include/spdk/bit_pool.h 00:22:14.395 TEST_HEADER include/spdk/blob_bdev.h 00:22:14.395 TEST_HEADER include/spdk/blobfs_bdev.h 00:22:14.395 TEST_HEADER include/spdk/blobfs.h 00:22:14.395 CC examples/interrupt_tgt/interrupt_tgt.o 00:22:14.395 TEST_HEADER include/spdk/blob.h 00:22:14.395 TEST_HEADER include/spdk/conf.h 00:22:14.395 TEST_HEADER include/spdk/config.h 00:22:14.395 TEST_HEADER include/spdk/cpuset.h 00:22:14.395 TEST_HEADER include/spdk/crc16.h 00:22:14.395 TEST_HEADER include/spdk/crc32.h 00:22:14.395 TEST_HEADER include/spdk/crc64.h 00:22:14.395 TEST_HEADER include/spdk/dif.h 00:22:14.395 TEST_HEADER include/spdk/dma.h 00:22:14.395 TEST_HEADER include/spdk/endian.h 00:22:14.395 TEST_HEADER include/spdk/env_dpdk.h 00:22:14.395 TEST_HEADER include/spdk/env.h 00:22:14.395 TEST_HEADER include/spdk/event.h 00:22:14.395 TEST_HEADER include/spdk/fd_group.h 00:22:14.395 TEST_HEADER include/spdk/fd.h 00:22:14.395 TEST_HEADER include/spdk/file.h 00:22:14.395 TEST_HEADER include/spdk/fsdev.h 00:22:14.395 TEST_HEADER include/spdk/fsdev_module.h 00:22:14.395 TEST_HEADER include/spdk/ftl.h 00:22:14.395 TEST_HEADER include/spdk/fuse_dispatcher.h 00:22:14.395 TEST_HEADER include/spdk/gpt_spec.h 00:22:14.395 TEST_HEADER include/spdk/hexlify.h 00:22:14.395 CC 
test/thread/poller_perf/poller_perf.o 00:22:14.395 TEST_HEADER include/spdk/histogram_data.h 00:22:14.395 TEST_HEADER include/spdk/idxd.h 00:22:14.395 TEST_HEADER include/spdk/idxd_spec.h 00:22:14.395 CC examples/util/zipf/zipf.o 00:22:14.395 TEST_HEADER include/spdk/init.h 00:22:14.395 TEST_HEADER include/spdk/ioat.h 00:22:14.395 TEST_HEADER include/spdk/ioat_spec.h 00:22:14.395 TEST_HEADER include/spdk/iscsi_spec.h 00:22:14.395 CC examples/ioat/perf/perf.o 00:22:14.395 TEST_HEADER include/spdk/json.h 00:22:14.395 TEST_HEADER include/spdk/jsonrpc.h 00:22:14.395 TEST_HEADER include/spdk/keyring.h 00:22:14.395 TEST_HEADER include/spdk/keyring_module.h 00:22:14.395 TEST_HEADER include/spdk/likely.h 00:22:14.395 TEST_HEADER include/spdk/log.h 00:22:14.395 TEST_HEADER include/spdk/lvol.h 00:22:14.395 TEST_HEADER include/spdk/md5.h 00:22:14.395 TEST_HEADER include/spdk/memory.h 00:22:14.395 TEST_HEADER include/spdk/mmio.h 00:22:14.395 TEST_HEADER include/spdk/nbd.h 00:22:14.395 TEST_HEADER include/spdk/net.h 00:22:14.395 TEST_HEADER include/spdk/notify.h 00:22:14.395 TEST_HEADER include/spdk/nvme.h 00:22:14.395 TEST_HEADER include/spdk/nvme_intel.h 00:22:14.395 TEST_HEADER include/spdk/nvme_ocssd.h 00:22:14.395 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:22:14.395 CC test/app/bdev_svc/bdev_svc.o 00:22:14.395 TEST_HEADER include/spdk/nvme_spec.h 00:22:14.395 TEST_HEADER include/spdk/nvme_zns.h 00:22:14.395 TEST_HEADER include/spdk/nvmf_cmd.h 00:22:14.395 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:22:14.395 TEST_HEADER include/spdk/nvmf.h 00:22:14.395 TEST_HEADER include/spdk/nvmf_spec.h 00:22:14.395 TEST_HEADER include/spdk/nvmf_transport.h 00:22:14.395 CC test/dma/test_dma/test_dma.o 00:22:14.395 TEST_HEADER include/spdk/opal.h 00:22:14.395 TEST_HEADER include/spdk/opal_spec.h 00:22:14.395 TEST_HEADER include/spdk/pci_ids.h 00:22:14.395 TEST_HEADER include/spdk/pipe.h 00:22:14.395 TEST_HEADER include/spdk/queue.h 00:22:14.395 TEST_HEADER include/spdk/reduce.h 
00:22:14.395 TEST_HEADER include/spdk/rpc.h 00:22:14.395 TEST_HEADER include/spdk/scheduler.h 00:22:14.395 TEST_HEADER include/spdk/scsi.h 00:22:14.395 TEST_HEADER include/spdk/scsi_spec.h 00:22:14.395 TEST_HEADER include/spdk/sock.h 00:22:14.395 TEST_HEADER include/spdk/stdinc.h 00:22:14.395 TEST_HEADER include/spdk/string.h 00:22:14.395 TEST_HEADER include/spdk/thread.h 00:22:14.395 TEST_HEADER include/spdk/trace.h 00:22:14.395 TEST_HEADER include/spdk/trace_parser.h 00:22:14.395 TEST_HEADER include/spdk/tree.h 00:22:14.395 TEST_HEADER include/spdk/ublk.h 00:22:14.395 LINK interrupt_tgt 00:22:14.395 TEST_HEADER include/spdk/util.h 00:22:14.395 TEST_HEADER include/spdk/uuid.h 00:22:14.395 TEST_HEADER include/spdk/version.h 00:22:14.395 TEST_HEADER include/spdk/vfio_user_pci.h 00:22:14.395 TEST_HEADER include/spdk/vfio_user_spec.h 00:22:14.395 TEST_HEADER include/spdk/vhost.h 00:22:14.395 TEST_HEADER include/spdk/vmd.h 00:22:14.395 LINK nvmf_tgt 00:22:14.395 TEST_HEADER include/spdk/xor.h 00:22:14.395 TEST_HEADER include/spdk/zipf.h 00:22:14.395 CXX test/cpp_headers/accel.o 00:22:14.395 LINK poller_perf 00:22:14.395 LINK zipf 00:22:14.395 LINK spdk_trace_record 00:22:14.653 LINK bdev_svc 00:22:14.653 CXX test/cpp_headers/accel_module.o 00:22:14.653 CXX test/cpp_headers/assert.o 00:22:14.653 LINK ioat_perf 00:22:14.653 CXX test/cpp_headers/barrier.o 00:22:14.653 CXX test/cpp_headers/base64.o 00:22:14.653 LINK spdk_trace 00:22:14.653 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:22:14.653 CXX test/cpp_headers/bdev.o 00:22:14.932 CC examples/ioat/verify/verify.o 00:22:14.932 CC test/rpc_client/rpc_client_test.o 00:22:14.932 CC test/env/mem_callbacks/mem_callbacks.o 00:22:14.932 CC test/event/event_perf/event_perf.o 00:22:14.932 CC test/event/reactor/reactor.o 00:22:14.932 LINK test_dma 00:22:14.932 CC app/iscsi_tgt/iscsi_tgt.o 00:22:14.932 CC examples/thread/thread/thread_ex.o 00:22:14.932 CXX test/cpp_headers/bdev_module.o 00:22:14.932 LINK reactor 00:22:14.932 LINK 
rpc_client_test 00:22:14.932 LINK event_perf 00:22:14.932 LINK verify 00:22:15.196 LINK iscsi_tgt 00:22:15.196 CXX test/cpp_headers/bdev_zone.o 00:22:15.196 CC test/env/vtophys/vtophys.o 00:22:15.196 LINK nvme_fuzz 00:22:15.196 CC test/event/reactor_perf/reactor_perf.o 00:22:15.196 LINK thread 00:22:15.196 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:22:15.196 CC app/spdk_lspci/spdk_lspci.o 00:22:15.196 CC app/spdk_tgt/spdk_tgt.o 00:22:15.196 LINK vtophys 00:22:15.196 CXX test/cpp_headers/bit_array.o 00:22:15.196 LINK reactor_perf 00:22:15.196 LINK mem_callbacks 00:22:15.196 CC test/env/memory/memory_ut.o 00:22:15.453 LINK spdk_lspci 00:22:15.453 LINK env_dpdk_post_init 00:22:15.453 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:22:15.453 LINK spdk_tgt 00:22:15.453 CXX test/cpp_headers/bit_pool.o 00:22:15.453 CC app/spdk_nvme_perf/perf.o 00:22:15.453 CC examples/sock/hello_world/hello_sock.o 00:22:15.453 CC app/spdk_nvme_identify/identify.o 00:22:15.453 CC test/event/app_repeat/app_repeat.o 00:22:15.453 CC app/spdk_nvme_discover/discovery_aer.o 00:22:15.453 CC test/env/pci/pci_ut.o 00:22:15.711 CXX test/cpp_headers/blob_bdev.o 00:22:15.711 LINK app_repeat 00:22:15.711 CC examples/vmd/lsvmd/lsvmd.o 00:22:15.711 LINK spdk_nvme_discover 00:22:15.711 CXX test/cpp_headers/blobfs_bdev.o 00:22:15.711 LINK hello_sock 00:22:15.968 LINK lsvmd 00:22:15.968 CC test/event/scheduler/scheduler.o 00:22:15.968 CXX test/cpp_headers/blobfs.o 00:22:15.968 CC examples/vmd/led/led.o 00:22:15.968 LINK pci_ut 00:22:15.968 CC test/app/histogram_perf/histogram_perf.o 00:22:15.968 CXX test/cpp_headers/blob.o 00:22:15.968 CC test/accel/dif/dif.o 00:22:16.226 LINK led 00:22:16.226 LINK scheduler 00:22:16.226 LINK histogram_perf 00:22:16.226 CXX test/cpp_headers/conf.o 00:22:16.226 LINK spdk_nvme_perf 00:22:16.226 CC test/blobfs/mkfs/mkfs.o 00:22:16.226 CXX test/cpp_headers/config.o 00:22:16.510 CC examples/idxd/perf/perf.o 00:22:16.510 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 
00:22:16.510 CXX test/cpp_headers/cpuset.o 00:22:16.510 CC app/spdk_top/spdk_top.o 00:22:16.510 LINK memory_ut 00:22:16.510 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:22:16.510 LINK spdk_nvme_identify 00:22:16.510 LINK mkfs 00:22:16.510 CXX test/cpp_headers/crc16.o 00:22:16.510 CC test/app/jsoncat/jsoncat.o 00:22:16.510 CXX test/cpp_headers/crc32.o 00:22:16.510 CXX test/cpp_headers/crc64.o 00:22:16.768 LINK jsoncat 00:22:16.768 LINK idxd_perf 00:22:16.768 CXX test/cpp_headers/dif.o 00:22:16.768 CC app/vhost/vhost.o 00:22:16.768 LINK dif 00:22:16.768 LINK vhost_fuzz 00:22:16.768 CXX test/cpp_headers/dma.o 00:22:16.768 CC examples/fsdev/hello_world/hello_fsdev.o 00:22:17.026 CC test/lvol/esnap/esnap.o 00:22:17.026 CC app/spdk_dd/spdk_dd.o 00:22:17.026 CC examples/accel/perf/accel_perf.o 00:22:17.026 LINK vhost 00:22:17.026 CXX test/cpp_headers/endian.o 00:22:17.026 LINK iscsi_fuzz 00:22:17.026 CXX test/cpp_headers/env_dpdk.o 00:22:17.026 CC app/fio/nvme/fio_plugin.o 00:22:17.026 CC examples/blob/hello_world/hello_blob.o 00:22:17.026 LINK hello_fsdev 00:22:17.285 CXX test/cpp_headers/env.o 00:22:17.285 CC examples/blob/cli/blobcli.o 00:22:17.285 LINK spdk_dd 00:22:17.285 LINK spdk_top 00:22:17.285 CC test/app/stub/stub.o 00:22:17.285 LINK accel_perf 00:22:17.285 LINK hello_blob 00:22:17.285 CXX test/cpp_headers/event.o 00:22:17.542 CC examples/nvme/hello_world/hello_world.o 00:22:17.542 LINK stub 00:22:17.542 CC examples/nvme/reconnect/reconnect.o 00:22:17.542 CC examples/nvme/nvme_manage/nvme_manage.o 00:22:17.542 CC examples/nvme/arbitration/arbitration.o 00:22:17.542 CXX test/cpp_headers/fd_group.o 00:22:17.542 CC examples/bdev/hello_world/hello_bdev.o 00:22:17.542 CXX test/cpp_headers/fd.o 00:22:17.542 LINK spdk_nvme 00:22:17.542 LINK hello_world 00:22:17.801 CC examples/bdev/bdevperf/bdevperf.o 00:22:17.801 LINK blobcli 00:22:17.801 CXX test/cpp_headers/file.o 00:22:17.801 LINK hello_bdev 00:22:17.801 LINK reconnect 00:22:17.801 LINK arbitration 00:22:17.801 CC 
app/fio/bdev/fio_plugin.o 00:22:17.802 CC examples/nvme/hotplug/hotplug.o 00:22:17.802 CXX test/cpp_headers/fsdev.o 00:22:18.061 CC test/nvme/aer/aer.o 00:22:18.061 CC test/nvme/reset/reset.o 00:22:18.061 CC test/nvme/sgl/sgl.o 00:22:18.061 CXX test/cpp_headers/fsdev_module.o 00:22:18.061 CC examples/nvme/cmb_copy/cmb_copy.o 00:22:18.061 LINK nvme_manage 00:22:18.318 CXX test/cpp_headers/ftl.o 00:22:18.318 LINK cmb_copy 00:22:18.318 LINK hotplug 00:22:18.318 LINK spdk_bdev 00:22:18.318 LINK reset 00:22:18.318 LINK sgl 00:22:18.318 LINK aer 00:22:18.318 CXX test/cpp_headers/fuse_dispatcher.o 00:22:18.318 CXX test/cpp_headers/gpt_spec.o 00:22:18.318 CC examples/nvme/abort/abort.o 00:22:18.318 CC test/nvme/e2edp/nvme_dp.o 00:22:18.318 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:22:18.576 CC test/nvme/overhead/overhead.o 00:22:18.576 CC test/nvme/err_injection/err_injection.o 00:22:18.576 CC test/nvme/startup/startup.o 00:22:18.576 CXX test/cpp_headers/hexlify.o 00:22:18.576 LINK pmr_persistence 00:22:18.576 LINK bdevperf 00:22:18.576 CC test/bdev/bdevio/bdevio.o 00:22:18.576 CXX test/cpp_headers/histogram_data.o 00:22:18.576 LINK err_injection 00:22:18.576 LINK startup 00:22:18.834 LINK nvme_dp 00:22:18.834 CXX test/cpp_headers/idxd.o 00:22:18.834 LINK overhead 00:22:18.834 CXX test/cpp_headers/idxd_spec.o 00:22:18.834 LINK abort 00:22:18.834 CXX test/cpp_headers/init.o 00:22:18.834 CXX test/cpp_headers/ioat.o 00:22:18.834 CC test/nvme/reserve/reserve.o 00:22:18.834 CXX test/cpp_headers/ioat_spec.o 00:22:18.834 CC test/nvme/simple_copy/simple_copy.o 00:22:18.834 CC test/nvme/connect_stress/connect_stress.o 00:22:18.834 CC test/nvme/boot_partition/boot_partition.o 00:22:18.834 CXX test/cpp_headers/iscsi_spec.o 00:22:19.091 CXX test/cpp_headers/json.o 00:22:19.091 CXX test/cpp_headers/jsonrpc.o 00:22:19.091 LINK bdevio 00:22:19.091 CC examples/nvmf/nvmf/nvmf.o 00:22:19.091 LINK connect_stress 00:22:19.091 LINK reserve 00:22:19.091 LINK boot_partition 
00:22:19.091 LINK simple_copy 00:22:19.091 CC test/nvme/compliance/nvme_compliance.o 00:22:19.091 CXX test/cpp_headers/keyring.o 00:22:19.091 CC test/nvme/fused_ordering/fused_ordering.o 00:22:19.349 CXX test/cpp_headers/keyring_module.o 00:22:19.349 CXX test/cpp_headers/likely.o 00:22:19.349 CC test/nvme/doorbell_aers/doorbell_aers.o 00:22:19.349 CC test/nvme/fdp/fdp.o 00:22:19.349 CC test/nvme/cuse/cuse.o 00:22:19.349 LINK nvmf 00:22:19.349 CXX test/cpp_headers/log.o 00:22:19.349 LINK fused_ordering 00:22:19.349 CXX test/cpp_headers/lvol.o 00:22:19.349 CXX test/cpp_headers/md5.o 00:22:19.349 CXX test/cpp_headers/memory.o 00:22:19.349 CXX test/cpp_headers/mmio.o 00:22:19.349 LINK doorbell_aers 00:22:19.349 CXX test/cpp_headers/nbd.o 00:22:19.607 CXX test/cpp_headers/net.o 00:22:19.607 CXX test/cpp_headers/notify.o 00:22:19.607 LINK nvme_compliance 00:22:19.607 CXX test/cpp_headers/nvme.o 00:22:19.607 CXX test/cpp_headers/nvme_intel.o 00:22:19.607 CXX test/cpp_headers/nvme_ocssd.o 00:22:19.607 CXX test/cpp_headers/nvme_ocssd_spec.o 00:22:19.607 CXX test/cpp_headers/nvme_spec.o 00:22:19.607 LINK fdp 00:22:19.607 CXX test/cpp_headers/nvme_zns.o 00:22:19.607 CXX test/cpp_headers/nvmf_cmd.o 00:22:19.607 CXX test/cpp_headers/nvmf_fc_spec.o 00:22:19.607 CXX test/cpp_headers/nvmf.o 00:22:19.864 CXX test/cpp_headers/nvmf_spec.o 00:22:19.864 CXX test/cpp_headers/nvmf_transport.o 00:22:19.864 CXX test/cpp_headers/opal.o 00:22:19.864 CXX test/cpp_headers/opal_spec.o 00:22:19.864 CXX test/cpp_headers/pci_ids.o 00:22:19.864 CXX test/cpp_headers/pipe.o 00:22:19.864 CXX test/cpp_headers/queue.o 00:22:19.864 CXX test/cpp_headers/reduce.o 00:22:19.864 CXX test/cpp_headers/rpc.o 00:22:19.864 CXX test/cpp_headers/scheduler.o 00:22:19.864 CXX test/cpp_headers/scsi.o 00:22:19.864 CXX test/cpp_headers/scsi_spec.o 00:22:19.864 CXX test/cpp_headers/sock.o 00:22:19.864 CXX test/cpp_headers/stdinc.o 00:22:19.864 CXX test/cpp_headers/string.o 00:22:19.864 CXX test/cpp_headers/thread.o 
00:22:20.121 CXX test/cpp_headers/trace.o 00:22:20.121 CXX test/cpp_headers/trace_parser.o 00:22:20.121 CXX test/cpp_headers/tree.o 00:22:20.121 CXX test/cpp_headers/ublk.o 00:22:20.121 CXX test/cpp_headers/util.o 00:22:20.121 CXX test/cpp_headers/uuid.o 00:22:20.121 CXX test/cpp_headers/version.o 00:22:20.121 CXX test/cpp_headers/vfio_user_pci.o 00:22:20.121 CXX test/cpp_headers/vfio_user_spec.o 00:22:20.121 CXX test/cpp_headers/vhost.o 00:22:20.121 CXX test/cpp_headers/vmd.o 00:22:20.121 CXX test/cpp_headers/xor.o 00:22:20.121 CXX test/cpp_headers/zipf.o 00:22:20.689 LINK cuse 00:22:22.062 LINK esnap 00:22:22.321 00:22:22.321 real 1m5.925s 00:22:22.321 user 6m14.528s 00:22:22.321 sys 1m5.854s 00:22:22.321 ************************************ 00:22:22.321 END TEST make 00:22:22.321 ************************************ 00:22:22.321 15:51:54 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:22:22.321 15:51:54 make -- common/autotest_common.sh@10 -- $ set +x 00:22:22.321 15:51:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:22:22.321 15:51:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:22.321 15:51:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:22.321 15:51:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:22.321 15:51:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:22.321 15:51:54 -- pm/common@44 -- $ pid=5043 00:22:22.321 15:51:54 -- pm/common@50 -- $ kill -TERM 5043 00:22:22.321 15:51:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:22.321 15:51:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:22.321 15:51:54 -- pm/common@44 -- $ pid=5044 00:22:22.321 15:51:54 -- pm/common@50 -- $ kill -TERM 5044 00:22:22.321 15:51:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:22:22.321 15:51:54 -- spdk/autorun.sh@27 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:22:22.321 15:51:54 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:22.321 15:51:54 -- common/autotest_common.sh@1691 -- # lcov --version 00:22:22.321 15:51:54 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:22.321 15:51:54 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:22.321 15:51:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.321 15:51:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.321 15:51:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.321 15:51:54 -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.321 15:51:54 -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.321 15:51:54 -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.321 15:51:54 -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.321 15:51:54 -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.321 15:51:54 -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.321 15:51:54 -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.321 15:51:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.321 15:51:54 -- scripts/common.sh@344 -- # case "$op" in 00:22:22.321 15:51:54 -- scripts/common.sh@345 -- # : 1 00:22:22.321 15:51:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.321 15:51:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:22.321 15:51:54 -- scripts/common.sh@365 -- # decimal 1 00:22:22.321 15:51:54 -- scripts/common.sh@353 -- # local d=1 00:22:22.321 15:51:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.321 15:51:54 -- scripts/common.sh@355 -- # echo 1 00:22:22.321 15:51:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.321 15:51:54 -- scripts/common.sh@366 -- # decimal 2 00:22:22.321 15:51:54 -- scripts/common.sh@353 -- # local d=2 00:22:22.321 15:51:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.321 15:51:54 -- scripts/common.sh@355 -- # echo 2 00:22:22.321 15:51:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.321 15:51:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.321 15:51:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.321 15:51:54 -- scripts/common.sh@368 -- # return 0 00:22:22.321 15:51:54 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.322 15:51:54 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:22.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.322 --rc genhtml_branch_coverage=1 00:22:22.322 --rc genhtml_function_coverage=1 00:22:22.322 --rc genhtml_legend=1 00:22:22.322 --rc geninfo_all_blocks=1 00:22:22.322 --rc geninfo_unexecuted_blocks=1 00:22:22.322 00:22:22.322 ' 00:22:22.322 15:51:54 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:22.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.322 --rc genhtml_branch_coverage=1 00:22:22.322 --rc genhtml_function_coverage=1 00:22:22.322 --rc genhtml_legend=1 00:22:22.322 --rc geninfo_all_blocks=1 00:22:22.322 --rc geninfo_unexecuted_blocks=1 00:22:22.322 00:22:22.322 ' 00:22:22.322 15:51:54 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:22.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.322 --rc genhtml_branch_coverage=1 00:22:22.322 --rc 
genhtml_function_coverage=1 00:22:22.322 --rc genhtml_legend=1 00:22:22.322 --rc geninfo_all_blocks=1 00:22:22.322 --rc geninfo_unexecuted_blocks=1 00:22:22.322 00:22:22.322 ' 00:22:22.322 15:51:54 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:22.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.322 --rc genhtml_branch_coverage=1 00:22:22.322 --rc genhtml_function_coverage=1 00:22:22.322 --rc genhtml_legend=1 00:22:22.322 --rc geninfo_all_blocks=1 00:22:22.322 --rc geninfo_unexecuted_blocks=1 00:22:22.322 00:22:22.322 ' 00:22:22.322 15:51:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.322 15:51:54 -- nvmf/common.sh@7 -- # uname -s 00:22:22.322 15:51:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.322 15:51:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.322 15:51:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.322 15:51:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.322 15:51:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.322 15:51:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.322 15:51:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.322 15:51:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.322 15:51:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.322 15:51:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.322 15:51:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:24555432-5c09-44b1-a72b-d75c56d455b0 00:22:22.322 15:51:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=24555432-5c09-44b1-a72b-d75c56d455b0 00:22:22.322 15:51:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.322 15:51:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.322 15:51:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:22.322 15:51:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:22:22.322 15:51:54 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.322 15:51:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.322 15:51:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.322 15:51:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.322 15:51:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.322 15:51:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.322 15:51:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.322 15:51:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.322 15:51:54 -- paths/export.sh@5 -- # export PATH 00:22:22.322 15:51:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.322 15:51:54 -- nvmf/common.sh@51 -- # : 0 00:22:22.322 15:51:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:22.322 15:51:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:22.322 15:51:54 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:22:22.322 15:51:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.322 15:51:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.322 15:51:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:22.322 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:22.322 15:51:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:22.322 15:51:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:22.322 15:51:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:22.322 15:51:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:22:22.580 15:51:54 -- spdk/autotest.sh@32 -- # uname -s 00:22:22.580 15:51:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:22:22.580 15:51:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:22:22.580 15:51:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:22:22.580 15:51:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:22:22.580 15:51:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:22:22.580 15:51:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:22:22.580 15:51:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:22:22.580 15:51:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:22:22.580 15:51:54 -- spdk/autotest.sh@48 -- # udevadm_pid=53699 00:22:22.580 15:51:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:22:22.580 15:51:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:22:22.580 15:51:54 -- pm/common@17 -- # local monitor 00:22:22.580 15:51:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:22:22.580 15:51:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:22:22.580 15:51:54 -- pm/common@25 -- # sleep 1 00:22:22.580 15:51:54 -- pm/common@21 -- # date +%s 00:22:22.580 15:51:54 -- 
pm/common@21 -- # date +%s 00:22:22.580 15:51:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730821914 00:22:22.580 15:51:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730821914 00:22:22.580 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730821914_collect-cpu-load.pm.log 00:22:22.580 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730821914_collect-vmstat.pm.log 00:22:23.514 15:51:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:22:23.514 15:51:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:22:23.514 15:51:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.514 15:51:55 -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 15:51:55 -- spdk/autotest.sh@59 -- # create_test_list 00:22:23.514 15:51:55 -- common/autotest_common.sh@750 -- # xtrace_disable 00:22:23.514 15:51:55 -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 15:51:55 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:22:23.514 15:51:55 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:22:23.514 15:51:55 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:22:23.514 15:51:55 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:22:23.514 15:51:55 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:22:23.514 15:51:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:22:23.514 15:51:55 -- common/autotest_common.sh@1455 -- # uname 00:22:23.514 15:51:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:22:23.514 15:51:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:22:23.514 15:51:55 -- common/autotest_common.sh@1475 -- 
# uname 00:22:23.514 15:51:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:22:23.514 15:51:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:22:23.514 15:51:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:22:23.514 lcov: LCOV version 1.15 00:22:23.514 15:51:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:22:38.396 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:22:38.396 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:22:53.305 15:52:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:22:53.305 15:52:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:53.305 15:52:23 -- common/autotest_common.sh@10 -- # set +x 00:22:53.305 15:52:23 -- spdk/autotest.sh@78 -- # rm -f 00:22:53.305 15:52:23 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:53.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:53.305 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:53.305 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:53.305 15:52:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:22:53.305 15:52:24 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:22:53.305 15:52:24 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:22:53.305 15:52:24 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:22:53.305 
15:52:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:22:53.305 15:52:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:22:53.305 15:52:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:53.305 15:52:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:53.305 15:52:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:53.305 15:52:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:22:53.305 15:52:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:22:53.305 15:52:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:22:53.305 15:52:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:53.305 15:52:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:53.305 15:52:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:22:53.305 15:52:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:22:53.305 15:52:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:22:53.305 15:52:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:53.305 15:52:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:53.305 15:52:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:22:53.305 15:52:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:22:53.305 15:52:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:53.305 15:52:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:53.305 15:52:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:53.305 15:52:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:22:53.305 15:52:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:53.305 15:52:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:53.305 15:52:24 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:22:53.305 15:52:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:22:53.305 15:52:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:22:53.305 No valid GPT data, bailing 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # pt= 00:22:53.305 15:52:24 -- scripts/common.sh@395 -- # return 1 00:22:53.305 15:52:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:22:53.305 1+0 records in 00:22:53.305 1+0 records out 00:22:53.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380521 s, 276 MB/s 00:22:53.305 15:52:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:53.305 15:52:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:53.305 15:52:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:22:53.305 15:52:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:22:53.305 15:52:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:22:53.305 No valid GPT data, bailing 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # pt= 00:22:53.305 15:52:24 -- scripts/common.sh@395 -- # return 1 00:22:53.305 15:52:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:22:53.305 1+0 records in 00:22:53.305 1+0 records out 00:22:53.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441014 s, 238 MB/s 00:22:53.305 15:52:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:53.305 15:52:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:53.305 15:52:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:22:53.305 15:52:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:22:53.305 15:52:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 
00:22:53.305 No valid GPT data, bailing 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # pt= 00:22:53.305 15:52:24 -- scripts/common.sh@395 -- # return 1 00:22:53.305 15:52:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:22:53.305 1+0 records in 00:22:53.305 1+0 records out 00:22:53.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00359802 s, 291 MB/s 00:22:53.305 15:52:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:53.305 15:52:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:53.305 15:52:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:22:53.305 15:52:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:22:53.305 15:52:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:22:53.305 No valid GPT data, bailing 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:53.305 15:52:24 -- scripts/common.sh@394 -- # pt= 00:22:53.305 15:52:24 -- scripts/common.sh@395 -- # return 1 00:22:53.305 15:52:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:22:53.305 1+0 records in 00:22:53.305 1+0 records out 00:22:53.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369231 s, 284 MB/s 00:22:53.305 15:52:24 -- spdk/autotest.sh@105 -- # sync 00:22:53.305 15:52:24 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:22:53.305 15:52:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:22:53.305 15:52:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:22:53.871 15:52:26 -- spdk/autotest.sh@111 -- # uname -s 00:22:53.871 15:52:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:22:53.871 15:52:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:22:53.871 15:52:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:22:54.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:54.437 Hugepages 00:22:54.437 node hugesize free / total 00:22:54.437 node0 1048576kB 0 / 0 00:22:54.437 node0 2048kB 0 / 0 00:22:54.437 00:22:54.437 Type BDF Vendor Device NUMA Driver Device Block devices 00:22:54.437 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:22:54.437 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:22:54.694 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:22:54.694 15:52:26 -- spdk/autotest.sh@117 -- # uname -s 00:22:54.694 15:52:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:22:54.694 15:52:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:22:54.694 15:52:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:54.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:55.885 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:56.855 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:56.855 15:52:29 -- common/autotest_common.sh@1515 -- # sleep 1 00:22:57.787 15:52:30 -- common/autotest_common.sh@1516 -- # bdfs=() 00:22:57.787 15:52:30 -- common/autotest_common.sh@1516 -- # local bdfs 00:22:57.787 15:52:30 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:22:57.787 15:52:30 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:22:57.787 15:52:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:22:57.787 15:52:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:22:57.787 15:52:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:57.787 15:52:30 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:57.787 15:52:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:22:57.787 15:52:30 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:22:57.787 15:52:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:57.787 15:52:30 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:58.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:58.045 Waiting for block devices as requested 00:22:58.045 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:58.303 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:58.303 15:52:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:22:58.303 15:52:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:22:58.303 15:52:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:22:58.303 15:52:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:22:58.303 15:52:30 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:22:58.303 15:52:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1541 -- # continue 00:22:58.303 15:52:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:22:58.303 15:52:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:22:58.303 15:52:30 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:22:58.303 15:52:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:22:58.303 15:52:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:22:58.303 15:52:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:22:58.303 15:52:30 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:22:58.303 15:52:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:22:58.303 15:52:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:22:58.303 15:52:30 -- common/autotest_common.sh@1541 -- # continue 00:22:58.303 15:52:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:22:58.303 15:52:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.303 15:52:30 -- common/autotest_common.sh@10 -- # set +x 00:22:58.303 15:52:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:22:58.303 15:52:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.303 15:52:30 -- common/autotest_common.sh@10 -- # set +x 00:22:58.303 15:52:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:58.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:58.869 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:58.869 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:59.127 15:52:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:22:59.127 15:52:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.127 15:52:31 -- common/autotest_common.sh@10 -- # set +x 00:22:59.127 15:52:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:22:59.127 15:52:31 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:22:59.127 15:52:31 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:22:59.127 15:52:31 -- common/autotest_common.sh@1561 -- # bdfs=() 00:22:59.127 15:52:31 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:22:59.127 15:52:31 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:22:59.127 15:52:31 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:22:59.127 15:52:31 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:22:59.127 
15:52:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:22:59.127 15:52:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:22:59.127 15:52:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:59.127 15:52:31 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:59.127 15:52:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:22:59.127 15:52:31 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:22:59.127 15:52:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:59.127 15:52:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:22:59.127 15:52:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:22:59.127 15:52:31 -- common/autotest_common.sh@1564 -- # device=0x0010 00:22:59.127 15:52:31 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:59.127 15:52:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:22:59.127 15:52:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:22:59.127 15:52:31 -- common/autotest_common.sh@1564 -- # device=0x0010 00:22:59.127 15:52:31 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:59.127 15:52:31 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:22:59.127 15:52:31 -- common/autotest_common.sh@1570 -- # return 0 00:22:59.127 15:52:31 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:22:59.127 15:52:31 -- common/autotest_common.sh@1578 -- # return 0 00:22:59.127 15:52:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:22:59.127 15:52:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:22:59.127 15:52:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:22:59.127 15:52:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:22:59.127 15:52:31 -- spdk/autotest.sh@149 -- # timing_enter lib 00:22:59.127 15:52:31 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.127 15:52:31 -- common/autotest_common.sh@10 -- # set +x 00:22:59.127 15:52:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:22:59.127 15:52:31 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:59.127 15:52:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:59.127 15:52:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.127 15:52:31 -- common/autotest_common.sh@10 -- # set +x 00:22:59.127 ************************************ 00:22:59.127 START TEST env 00:22:59.127 ************************************ 00:22:59.127 15:52:31 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:59.127 * Looking for test storage... 00:22:59.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:22:59.127 15:52:31 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:59.127 15:52:31 env -- common/autotest_common.sh@1691 -- # lcov --version 00:22:59.127 15:52:31 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:59.385 15:52:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.385 15:52:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.385 15:52:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.385 15:52:31 env -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.385 15:52:31 env -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.385 15:52:31 env -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.385 15:52:31 env -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.385 15:52:31 env -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.385 15:52:31 env -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.385 15:52:31 env -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.385 15:52:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.385 15:52:31 env -- 
scripts/common.sh@344 -- # case "$op" in 00:22:59.385 15:52:31 env -- scripts/common.sh@345 -- # : 1 00:22:59.385 15:52:31 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.385 15:52:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.385 15:52:31 env -- scripts/common.sh@365 -- # decimal 1 00:22:59.385 15:52:31 env -- scripts/common.sh@353 -- # local d=1 00:22:59.385 15:52:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.385 15:52:31 env -- scripts/common.sh@355 -- # echo 1 00:22:59.385 15:52:31 env -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.385 15:52:31 env -- scripts/common.sh@366 -- # decimal 2 00:22:59.385 15:52:31 env -- scripts/common.sh@353 -- # local d=2 00:22:59.385 15:52:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.385 15:52:31 env -- scripts/common.sh@355 -- # echo 2 00:22:59.385 15:52:31 env -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.385 15:52:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.385 15:52:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.385 15:52:31 env -- scripts/common.sh@368 -- # return 0 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.385 --rc genhtml_branch_coverage=1 00:22:59.385 --rc genhtml_function_coverage=1 00:22:59.385 --rc genhtml_legend=1 00:22:59.385 --rc geninfo_all_blocks=1 00:22:59.385 --rc geninfo_unexecuted_blocks=1 00:22:59.385 00:22:59.385 ' 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.385 --rc genhtml_branch_coverage=1 00:22:59.385 --rc genhtml_function_coverage=1 00:22:59.385 --rc genhtml_legend=1 00:22:59.385 --rc 
geninfo_all_blocks=1 00:22:59.385 --rc geninfo_unexecuted_blocks=1 00:22:59.385 00:22:59.385 ' 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.385 --rc genhtml_branch_coverage=1 00:22:59.385 --rc genhtml_function_coverage=1 00:22:59.385 --rc genhtml_legend=1 00:22:59.385 --rc geninfo_all_blocks=1 00:22:59.385 --rc geninfo_unexecuted_blocks=1 00:22:59.385 00:22:59.385 ' 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.385 --rc genhtml_branch_coverage=1 00:22:59.385 --rc genhtml_function_coverage=1 00:22:59.385 --rc genhtml_legend=1 00:22:59.385 --rc geninfo_all_blocks=1 00:22:59.385 --rc geninfo_unexecuted_blocks=1 00:22:59.385 00:22:59.385 ' 00:22:59.385 15:52:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:59.385 15:52:31 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.385 15:52:31 env -- common/autotest_common.sh@10 -- # set +x 00:22:59.385 ************************************ 00:22:59.385 START TEST env_memory 00:22:59.385 ************************************ 00:22:59.385 15:52:31 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:59.385 00:22:59.385 00:22:59.385 CUnit - A unit testing framework for C - Version 2.1-3 00:22:59.385 http://cunit.sourceforge.net/ 00:22:59.385 00:22:59.385 00:22:59.385 Suite: memory 00:22:59.385 Test: alloc and free memory map ...[2024-11-05 15:52:31.624962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:22:59.385 passed 00:22:59.385 Test: mem map translation ...[2024-11-05 15:52:31.663575] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:22:59.385 [2024-11-05 15:52:31.663635] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:22:59.385 [2024-11-05 15:52:31.663695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:22:59.385 [2024-11-05 15:52:31.663710] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:22:59.385 passed 00:22:59.385 Test: mem map registration ...[2024-11-05 15:52:31.731776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:22:59.385 [2024-11-05 15:52:31.731833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:22:59.385 passed 00:22:59.643 Test: mem map adjacent registrations ...passed 00:22:59.643 00:22:59.643 Run Summary: Type Total Ran Passed Failed Inactive 00:22:59.643 suites 1 1 n/a 0 0 00:22:59.643 tests 4 4 4 0 0 00:22:59.643 asserts 152 152 152 0 n/a 00:22:59.643 00:22:59.643 Elapsed time = 0.233 seconds 00:22:59.643 00:22:59.644 real 0m0.264s 00:22:59.644 user 0m0.239s 00:22:59.644 sys 0m0.019s 00:22:59.644 15:52:31 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.644 15:52:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:22:59.644 ************************************ 00:22:59.644 END TEST env_memory 00:22:59.644 ************************************ 00:22:59.644 15:52:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:59.644 
15:52:31 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:59.644 15:52:31 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.644 15:52:31 env -- common/autotest_common.sh@10 -- # set +x 00:22:59.644 ************************************ 00:22:59.644 START TEST env_vtophys 00:22:59.644 ************************************ 00:22:59.644 15:52:31 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:59.644 EAL: lib.eal log level changed from notice to debug 00:22:59.644 EAL: Detected lcore 0 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 1 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 2 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 3 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 4 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 5 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 6 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 7 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 8 as core 0 on socket 0 00:22:59.644 EAL: Detected lcore 9 as core 0 on socket 0 00:22:59.644 EAL: Maximum logical cores by configuration: 128 00:22:59.644 EAL: Detected CPU lcores: 10 00:22:59.644 EAL: Detected NUMA nodes: 1 00:22:59.644 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:22:59.644 EAL: Detected shared linkage of DPDK 00:22:59.644 EAL: No shared files mode enabled, IPC will be disabled 00:22:59.644 EAL: Selected IOVA mode 'PA' 00:22:59.644 EAL: Probing VFIO support... 00:22:59.644 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:22:59.644 EAL: VFIO modules not loaded, skipping VFIO support... 00:22:59.644 EAL: Ask a virtual area of 0x2e000 bytes 00:22:59.644 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:22:59.644 EAL: Setting up physically contiguous memory... 
00:22:59.644 EAL: Setting maximum number of open files to 524288 00:22:59.644 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:22:59.644 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:22:59.644 EAL: Ask a virtual area of 0x61000 bytes 00:22:59.644 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:22:59.644 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:59.644 EAL: Ask a virtual area of 0x400000000 bytes 00:22:59.644 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:22:59.644 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:22:59.644 EAL: Ask a virtual area of 0x61000 bytes 00:22:59.644 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:22:59.644 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:59.644 EAL: Ask a virtual area of 0x400000000 bytes 00:22:59.644 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:22:59.644 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:22:59.644 EAL: Ask a virtual area of 0x61000 bytes 00:22:59.644 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:22:59.644 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:59.644 EAL: Ask a virtual area of 0x400000000 bytes 00:22:59.644 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:22:59.644 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:22:59.644 EAL: Ask a virtual area of 0x61000 bytes 00:22:59.644 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:22:59.644 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:59.644 EAL: Ask a virtual area of 0x400000000 bytes 00:22:59.644 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:22:59.644 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:22:59.644 EAL: Hugepages will be freed exactly as allocated. 
00:22:59.644 EAL: No shared files mode enabled, IPC is disabled 00:22:59.644 EAL: No shared files mode enabled, IPC is disabled 00:22:59.644 EAL: TSC frequency is ~2600000 KHz 00:22:59.644 EAL: Main lcore 0 is ready (tid=7fdf236efa40;cpuset=[0]) 00:22:59.644 EAL: Trying to obtain current memory policy. 00:22:59.644 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:59.644 EAL: Restoring previous memory policy: 0 00:22:59.644 EAL: request: mp_malloc_sync 00:22:59.644 EAL: No shared files mode enabled, IPC is disabled 00:22:59.644 EAL: Heap on socket 0 was expanded by 2MB 00:22:59.644 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:22:59.644 EAL: No PCI address specified using 'addr=' in: bus=pci 00:22:59.644 EAL: Mem event callback 'spdk:(nil)' registered 00:22:59.644 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:22:59.902 00:22:59.902 00:22:59.902 CUnit - A unit testing framework for C - Version 2.1-3 00:22:59.902 http://cunit.sourceforge.net/ 00:22:59.902 00:22:59.902 00:22:59.902 Suite: components_suite 00:23:00.161 Test: vtophys_malloc_test ...passed 00:23:00.161 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:23:00.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.161 EAL: Restoring previous memory policy: 4 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was expanded by 4MB 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was shrunk by 4MB 00:23:00.161 EAL: Trying to obtain current memory policy. 
00:23:00.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.161 EAL: Restoring previous memory policy: 4 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was expanded by 6MB 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was shrunk by 6MB 00:23:00.161 EAL: Trying to obtain current memory policy. 00:23:00.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.161 EAL: Restoring previous memory policy: 4 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was expanded by 10MB 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was shrunk by 10MB 00:23:00.161 EAL: Trying to obtain current memory policy. 00:23:00.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.161 EAL: Restoring previous memory policy: 4 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was expanded by 18MB 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was shrunk by 18MB 00:23:00.161 EAL: Trying to obtain current memory policy. 
00:23:00.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.161 EAL: Restoring previous memory policy: 4 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was expanded by 34MB 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.161 EAL: Heap on socket 0 was shrunk by 34MB 00:23:00.161 EAL: Trying to obtain current memory policy. 00:23:00.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.161 EAL: Restoring previous memory policy: 4 00:23:00.161 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.161 EAL: request: mp_malloc_sync 00:23:00.161 EAL: No shared files mode enabled, IPC is disabled 00:23:00.162 EAL: Heap on socket 0 was expanded by 66MB 00:23:00.419 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.419 EAL: request: mp_malloc_sync 00:23:00.420 EAL: No shared files mode enabled, IPC is disabled 00:23:00.420 EAL: Heap on socket 0 was shrunk by 66MB 00:23:00.420 EAL: Trying to obtain current memory policy. 00:23:00.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.420 EAL: Restoring previous memory policy: 4 00:23:00.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.420 EAL: request: mp_malloc_sync 00:23:00.420 EAL: No shared files mode enabled, IPC is disabled 00:23:00.420 EAL: Heap on socket 0 was expanded by 130MB 00:23:00.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.679 EAL: request: mp_malloc_sync 00:23:00.679 EAL: No shared files mode enabled, IPC is disabled 00:23:00.679 EAL: Heap on socket 0 was shrunk by 130MB 00:23:00.679 EAL: Trying to obtain current memory policy. 
00:23:00.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:00.679 EAL: Restoring previous memory policy: 4 00:23:00.679 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.679 EAL: request: mp_malloc_sync 00:23:00.679 EAL: No shared files mode enabled, IPC is disabled 00:23:00.679 EAL: Heap on socket 0 was expanded by 258MB 00:23:00.937 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.937 EAL: request: mp_malloc_sync 00:23:00.937 EAL: No shared files mode enabled, IPC is disabled 00:23:00.937 EAL: Heap on socket 0 was shrunk by 258MB 00:23:01.195 EAL: Trying to obtain current memory policy. 00:23:01.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:01.454 EAL: Restoring previous memory policy: 4 00:23:01.454 EAL: Calling mem event callback 'spdk:(nil)' 00:23:01.454 EAL: request: mp_malloc_sync 00:23:01.454 EAL: No shared files mode enabled, IPC is disabled 00:23:01.454 EAL: Heap on socket 0 was expanded by 514MB 00:23:02.021 EAL: Calling mem event callback 'spdk:(nil)' 00:23:02.021 EAL: request: mp_malloc_sync 00:23:02.021 EAL: No shared files mode enabled, IPC is disabled 00:23:02.021 EAL: Heap on socket 0 was shrunk by 514MB 00:23:02.586 EAL: Trying to obtain current memory policy. 
00:23:02.586 EAL: Setting policy MPOL_PREFERRED for socket 0
00:23:02.586 EAL: Restoring previous memory policy: 4
00:23:02.586 EAL: Calling mem event callback 'spdk:(nil)'
00:23:02.586 EAL: request: mp_malloc_sync
00:23:02.586 EAL: No shared files mode enabled, IPC is disabled
00:23:02.587 EAL: Heap on socket 0 was expanded by 1026MB
00:23:03.958 EAL: Calling mem event callback 'spdk:(nil)'
00:23:03.958 EAL: request: mp_malloc_sync
00:23:03.958 EAL: No shared files mode enabled, IPC is disabled
00:23:03.958 EAL: Heap on socket 0 was shrunk by 1026MB
00:23:04.894 passed
00:23:04.894
00:23:04.894 Run Summary: Type Total Ran Passed Failed Inactive
00:23:04.894 suites 1 1 n/a 0 0
00:23:04.894 tests 2 2 2 0 0
00:23:04.894 asserts 5817 5817 5817 0 n/a
00:23:04.894
00:23:04.894 Elapsed time = 5.165 seconds
00:23:04.894 EAL: Calling mem event callback 'spdk:(nil)'
00:23:04.894 EAL: request: mp_malloc_sync
00:23:04.894 EAL: No shared files mode enabled, IPC is disabled
00:23:04.894 EAL: Heap on socket 0 was shrunk by 2MB
00:23:04.894 EAL: No shared files mode enabled, IPC is disabled
00:23:04.894 EAL: No shared files mode enabled, IPC is disabled
00:23:04.894 EAL: No shared files mode enabled, IPC is disabled
00:23:04.894
00:23:04.894 real 0m5.421s
00:23:04.894 user 0m4.609s
00:23:04.894 sys 0m0.664s
00:23:04.894 ************************************
00:23:04.894 END TEST env_vtophys
00:23:04.894 ************************************
00:23:04.894 15:52:37 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:04.894 15:52:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:23:05.159 15:52:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:23:05.159 15:52:37 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:05.159 15:52:37 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:05.159 15:52:37 env -- common/autotest_common.sh@10 -- # set +x
00:23:05.159 ************************************
00:23:05.159 START TEST env_pci
00:23:05.159 ************************************
00:23:05.159 15:52:37 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:23:05.159
00:23:05.159
00:23:05.159 CUnit - A unit testing framework for C - Version 2.1-3
00:23:05.159 http://cunit.sourceforge.net/
00:23:05.159
00:23:05.159
00:23:05.159 Suite: pci
00:23:05.159 Test: pci_hook ...[2024-11-05 15:52:37.359906] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55942 has claimed it
00:23:05.159 passed
00:23:05.159
00:23:05.159 EAL: Cannot find device (10000:00:01.0)
00:23:05.159 EAL: Failed to attach device on primary process
00:23:05.159 Run Summary: Type Total Ran Passed Failed Inactive
00:23:05.159 suites 1 1 n/a 0 0
00:23:05.159 tests 1 1 1 0 0
00:23:05.159 asserts 25 25 25 0 n/a
00:23:05.159
00:23:05.159 Elapsed time = 0.005 seconds
00:23:05.159
00:23:05.159 real 0m0.069s
00:23:05.159 user 0m0.036s
00:23:05.159 sys 0m0.032s
00:23:05.159 15:52:37 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:05.159 15:52:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:23:05.159 ************************************
00:23:05.159 END TEST env_pci
00:23:05.159 ************************************
00:23:05.159 15:52:37 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:23:05.159 15:52:37 env -- env/env.sh@15 -- # uname
00:23:05.159 15:52:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:23:05.159 15:52:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:23:05.159 15:52:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:23:05.159 15:52:37 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:23:05.159 15:52:37 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:05.159 15:52:37 env -- common/autotest_common.sh@10 -- # set +x
00:23:05.159 ************************************
00:23:05.159 START TEST env_dpdk_post_init
00:23:05.159 ************************************
00:23:05.159 15:52:37 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:23:05.159 EAL: Detected CPU lcores: 10
00:23:05.159 EAL: Detected NUMA nodes: 1
00:23:05.159 EAL: Detected shared linkage of DPDK
00:23:05.159 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:23:05.159 EAL: Selected IOVA mode 'PA'
00:23:05.452 TELEMETRY: No legacy callbacks, legacy socket not created
00:23:05.452 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:23:05.452 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:23:05.452 Starting DPDK initialization...
00:23:05.452 Starting SPDK post initialization...
00:23:05.452 SPDK NVMe probe
00:23:05.452 Attaching to 0000:00:10.0
00:23:05.452 Attaching to 0000:00:11.0
00:23:05.452 Attached to 0000:00:10.0
00:23:05.452 Attached to 0000:00:11.0
00:23:05.452 Cleaning up...
00:23:05.452
00:23:05.452 real 0m0.226s
00:23:05.452 user 0m0.063s
00:23:05.452 sys 0m0.062s
00:23:05.452 15:52:37 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:05.452 ************************************
00:23:05.452 END TEST env_dpdk_post_init
00:23:05.452 ************************************
00:23:05.452 15:52:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:23:05.452 15:52:37 env -- env/env.sh@26 -- # uname
00:23:05.452 15:52:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:23:05.452 15:52:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:23:05.452 15:52:37 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:05.452 15:52:37 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:05.452 15:52:37 env -- common/autotest_common.sh@10 -- # set +x
00:23:05.452 ************************************
00:23:05.452 START TEST env_mem_callbacks
00:23:05.452 ************************************
00:23:05.452 15:52:37 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:23:05.452 EAL: Detected CPU lcores: 10
00:23:05.452 EAL: Detected NUMA nodes: 1
00:23:05.452 EAL: Detected shared linkage of DPDK
00:23:05.452 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:23:05.452 EAL: Selected IOVA mode 'PA'
00:23:05.711 TELEMETRY: No legacy callbacks, legacy socket not created
00:23:05.711
00:23:05.711
00:23:05.711 CUnit - A unit testing framework for C - Version 2.1-3
00:23:05.711 http://cunit.sourceforge.net/
00:23:05.711
00:23:05.711
00:23:05.711 Suite: memory
00:23:05.711 Test: test ...
00:23:05.711 register 0x200000200000 2097152
00:23:05.711 malloc 3145728
00:23:05.711 register 0x200000400000 4194304
00:23:05.711 buf 0x2000004fffc0 len 3145728 PASSED
00:23:05.711 malloc 64
00:23:05.711 buf 0x2000004ffec0 len 64 PASSED
00:23:05.711 malloc 4194304
00:23:05.711 register 0x200000800000 6291456
00:23:05.711 buf 0x2000009fffc0 len 4194304 PASSED
00:23:05.711 free 0x2000004fffc0 3145728
00:23:05.711 free 0x2000004ffec0 64
00:23:05.711 unregister 0x200000400000 4194304 PASSED
00:23:05.711 free 0x2000009fffc0 4194304
00:23:05.711 unregister 0x200000800000 6291456 PASSED
00:23:05.711 malloc 8388608
00:23:05.711 register 0x200000400000 10485760
00:23:05.711 buf 0x2000005fffc0 len 8388608 PASSED
00:23:05.711 free 0x2000005fffc0 8388608
00:23:05.711 unregister 0x200000400000 10485760 PASSED
00:23:05.711 passed
00:23:05.711
00:23:05.711 Run Summary: Type Total Ran Passed Failed Inactive
00:23:05.711 suites 1 1 n/a 0 0
00:23:05.711 tests 1 1 1 0 0
00:23:05.711 asserts 15 15 15 0 n/a
00:23:05.711
00:23:05.711 Elapsed time = 0.041 seconds
00:23:05.711
00:23:05.711 real 0m0.223s
00:23:05.711 user 0m0.067s
00:23:05.711 sys 0m0.054s
00:23:05.711 15:52:37 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:05.711 ************************************
00:23:05.711 END TEST env_mem_callbacks
00:23:05.711 ************************************
00:23:05.711 15:52:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:23:05.711
00:23:05.711 real 0m6.542s
00:23:05.711 user 0m5.152s
00:23:05.711 sys 0m1.033s
00:23:05.711 15:52:37 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:05.711 15:52:37 env -- common/autotest_common.sh@10 -- # set +x
00:23:05.711 ************************************
00:23:05.711 END TEST env
00:23:05.711 ************************************
00:23:05.711 15:52:37 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:23:05.711 15:52:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:05.711 15:52:37 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:05.711 15:52:37 -- common/autotest_common.sh@10 -- # set +x
00:23:05.711 ************************************
00:23:05.711 START TEST rpc
00:23:05.711 ************************************
00:23:05.711 15:52:38 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:23:05.711 * Looking for test storage...
00:23:05.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:23:05.711 15:52:38 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:23:05.711 15:52:38 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:23:05.711 15:52:38 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:23:05.711 15:52:38 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:23:05.711 15:52:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:05.711 15:52:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:05.711 15:52:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:05.711 15:52:38 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:23:05.711 15:52:38 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:23:05.711 15:52:38 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:23:05.711 15:52:38 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:23:05.711 15:52:38 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:23:05.711 15:52:38 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:23:05.711 15:52:38 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:23:05.711 15:52:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:05.711 15:52:38 rpc -- scripts/common.sh@344 -- # case "$op" in
00:23:05.711 15:52:38 rpc -- scripts/common.sh@345 -- # : 1
00:23:05.711 15:52:38 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:05.970 15:52:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:05.970 15:52:38 rpc -- scripts/common.sh@365 -- # decimal 1
00:23:05.970 15:52:38 rpc -- scripts/common.sh@353 -- # local d=1
00:23:05.970 15:52:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:05.970 15:52:38 rpc -- scripts/common.sh@355 -- # echo 1
00:23:05.970 15:52:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:23:05.970 15:52:38 rpc -- scripts/common.sh@366 -- # decimal 2
00:23:05.970 15:52:38 rpc -- scripts/common.sh@353 -- # local d=2
00:23:05.970 15:52:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:05.970 15:52:38 rpc -- scripts/common.sh@355 -- # echo 2
00:23:05.970 15:52:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:23:05.970 15:52:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:05.970 15:52:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:05.970 15:52:38 rpc -- scripts/common.sh@368 -- # return 0
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:23:05.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:05.970 --rc genhtml_branch_coverage=1
00:23:05.970 --rc genhtml_function_coverage=1
00:23:05.970 --rc genhtml_legend=1
00:23:05.970 --rc geninfo_all_blocks=1
00:23:05.970 --rc geninfo_unexecuted_blocks=1
00:23:05.970
00:23:05.970 '
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:23:05.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:05.970 --rc genhtml_branch_coverage=1
00:23:05.970 --rc genhtml_function_coverage=1
00:23:05.970 --rc genhtml_legend=1
00:23:05.970 --rc geninfo_all_blocks=1
00:23:05.970 --rc geninfo_unexecuted_blocks=1
00:23:05.970
00:23:05.970 '
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:23:05.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:05.970 --rc genhtml_branch_coverage=1
00:23:05.970 --rc genhtml_function_coverage=1
00:23:05.970 --rc genhtml_legend=1
00:23:05.970 --rc geninfo_all_blocks=1
00:23:05.970 --rc geninfo_unexecuted_blocks=1
00:23:05.970
00:23:05.970 '
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:23:05.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:05.970 --rc genhtml_branch_coverage=1
00:23:05.970 --rc genhtml_function_coverage=1
00:23:05.970 --rc genhtml_legend=1
00:23:05.970 --rc geninfo_all_blocks=1
00:23:05.970 --rc geninfo_unexecuted_blocks=1
00:23:05.970
00:23:05.970 '
00:23:05.970 15:52:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56058
00:23:05.970 15:52:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:23:05.970 15:52:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56058
00:23:05.970 15:52:38 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@833 -- # '[' -z 56058 ']'
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:05.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:05.970 15:52:38 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:05.970 [2024-11-05 15:52:38.221367] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization...
00:23:05.970 [2024-11-05 15:52:38.221488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56058 ]
00:23:05.970 [2024-11-05 15:52:38.381060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:06.280 [2024-11-05 15:52:38.481651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:23:06.280 [2024-11-05 15:52:38.481725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56058' to capture a snapshot of events at runtime.
00:23:06.280 [2024-11-05 15:52:38.481736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:06.280 [2024-11-05 15:52:38.481745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:06.280 [2024-11-05 15:52:38.481753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56058 for offline analysis/debug.
00:23:06.280 [2024-11-05 15:52:38.482616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:06.845 15:52:39 rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:06.845 15:52:39 rpc -- common/autotest_common.sh@866 -- # return 0
00:23:06.845 15:52:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:23:06.845 15:52:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:23:06.845 15:52:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:23:06.845 15:52:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:23:06.845 15:52:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:06.845 15:52:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:06.845 15:52:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:06.845 ************************************
00:23:06.845 START TEST rpc_integrity
00:23:06.845 ************************************
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:06.845 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.845 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:23:06.845 {
00:23:06.845 "name": "Malloc0",
00:23:06.845 "aliases": [
00:23:06.845 "3be0e60c-3745-408c-9e8f-e564fb1fc739"
00:23:06.845 ],
00:23:06.846 "product_name": "Malloc disk",
00:23:06.846 "block_size": 512,
00:23:06.846 "num_blocks": 16384,
00:23:06.846 "uuid": "3be0e60c-3745-408c-9e8f-e564fb1fc739",
00:23:06.846 "assigned_rate_limits": {
00:23:06.846 "rw_ios_per_sec": 0,
00:23:06.846 "rw_mbytes_per_sec": 0,
00:23:06.846 "r_mbytes_per_sec": 0,
00:23:06.846 "w_mbytes_per_sec": 0
00:23:06.846 },
00:23:06.846 "claimed": false,
00:23:06.846 "zoned": false,
00:23:06.846 "supported_io_types": {
00:23:06.846 "read": true,
00:23:06.846 "write": true,
00:23:06.846 "unmap": true,
00:23:06.846 "flush": true,
00:23:06.846 "reset": true,
00:23:06.846 "nvme_admin": false,
00:23:06.846 "nvme_io": false,
00:23:06.846 "nvme_io_md": false,
00:23:06.846 "write_zeroes": true,
00:23:06.846 "zcopy": true,
00:23:06.846 "get_zone_info": false,
00:23:06.846 "zone_management": false,
00:23:06.846 "zone_append": false,
00:23:06.846 "compare": false,
00:23:06.846 "compare_and_write": false,
00:23:06.846 "abort": true,
00:23:06.846 "seek_hole": false,
00:23:06.846 "seek_data": false,
00:23:06.846 "copy": true,
00:23:06.846 "nvme_iov_md": false
00:23:06.846 },
00:23:06.846 "memory_domains": [
00:23:06.846 {
00:23:06.846 "dma_device_id": "system",
00:23:06.846 "dma_device_type": 1
00:23:06.846 },
00:23:06.846 {
00:23:06.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:06.846 "dma_device_type": 2
00:23:06.846 }
00:23:06.846 ],
00:23:06.846 "driver_specific": {}
00:23:06.846 }
00:23:06.846 ]'
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:06.846 [2024-11-05 15:52:39.201587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:23:06.846 [2024-11-05 15:52:39.201652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:06.846 [2024-11-05 15:52:39.201675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:23:06.846 [2024-11-05 15:52:39.201706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:06.846 [2024-11-05 15:52:39.203933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:06.846 [2024-11-05 15:52:39.203972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:23:06.846 Passthru0
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:23:06.846 {
00:23:06.846 "name": "Malloc0",
00:23:06.846 "aliases": [
00:23:06.846 "3be0e60c-3745-408c-9e8f-e564fb1fc739"
00:23:06.846 ],
00:23:06.846 "product_name": "Malloc disk",
00:23:06.846 "block_size": 512,
00:23:06.846 "num_blocks": 16384,
00:23:06.846 "uuid": "3be0e60c-3745-408c-9e8f-e564fb1fc739",
00:23:06.846 "assigned_rate_limits": {
00:23:06.846 "rw_ios_per_sec": 0,
00:23:06.846 "rw_mbytes_per_sec": 0,
00:23:06.846 "r_mbytes_per_sec": 0,
00:23:06.846 "w_mbytes_per_sec": 0
00:23:06.846 },
00:23:06.846 "claimed": true,
00:23:06.846 "claim_type": "exclusive_write",
00:23:06.846 "zoned": false,
00:23:06.846 "supported_io_types": {
00:23:06.846 "read": true,
00:23:06.846 "write": true,
00:23:06.846 "unmap": true,
00:23:06.846 "flush": true,
00:23:06.846 "reset": true,
00:23:06.846 "nvme_admin": false,
00:23:06.846 "nvme_io": false,
00:23:06.846 "nvme_io_md": false,
00:23:06.846 "write_zeroes": true,
00:23:06.846 "zcopy": true,
00:23:06.846 "get_zone_info": false,
00:23:06.846 "zone_management": false,
00:23:06.846 "zone_append": false,
00:23:06.846 "compare": false,
00:23:06.846 "compare_and_write": false,
00:23:06.846 "abort": true,
00:23:06.846 "seek_hole": false,
00:23:06.846 "seek_data": false,
00:23:06.846 "copy": true,
00:23:06.846 "nvme_iov_md": false
00:23:06.846 },
00:23:06.846 "memory_domains": [
00:23:06.846 {
00:23:06.846 "dma_device_id": "system",
00:23:06.846 "dma_device_type": 1
00:23:06.846 },
00:23:06.846 {
00:23:06.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:06.846 "dma_device_type": 2
00:23:06.846 }
00:23:06.846 ],
00:23:06.846 "driver_specific": {}
00:23:06.846 },
00:23:06.846 {
00:23:06.846 "name": "Passthru0",
00:23:06.846 "aliases": [
00:23:06.846 "4d9544ab-c62c-56b4-b2a2-d2ce557e525b"
00:23:06.846 ],
00:23:06.846 "product_name": "passthru",
00:23:06.846 "block_size": 512,
00:23:06.846 "num_blocks": 16384,
00:23:06.846 "uuid": "4d9544ab-c62c-56b4-b2a2-d2ce557e525b",
00:23:06.846 "assigned_rate_limits": {
00:23:06.846 "rw_ios_per_sec": 0,
00:23:06.846 "rw_mbytes_per_sec": 0,
00:23:06.846 "r_mbytes_per_sec": 0,
00:23:06.846 "w_mbytes_per_sec": 0
00:23:06.846 },
00:23:06.846 "claimed": false,
00:23:06.846 "zoned": false,
00:23:06.846 "supported_io_types": {
00:23:06.846 "read": true,
00:23:06.846 "write": true,
00:23:06.846 "unmap": true,
00:23:06.846 "flush": true,
00:23:06.846 "reset": true,
00:23:06.846 "nvme_admin": false,
00:23:06.846 "nvme_io": false,
00:23:06.846 "nvme_io_md": false,
00:23:06.846 "write_zeroes": true,
00:23:06.846 "zcopy": true,
00:23:06.846 "get_zone_info": false,
00:23:06.846 "zone_management": false,
00:23:06.846 "zone_append": false,
00:23:06.846 "compare": false,
00:23:06.846 "compare_and_write": false,
00:23:06.846 "abort": true,
00:23:06.846 "seek_hole": false,
00:23:06.846 "seek_data": false,
00:23:06.846 "copy": true,
00:23:06.846 "nvme_iov_md": false
00:23:06.846 },
00:23:06.846 "memory_domains": [
00:23:06.846 {
00:23:06.846 "dma_device_id": "system",
00:23:06.846 "dma_device_type": 1
00:23:06.846 },
00:23:06.846 {
00:23:06.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:06.846 "dma_device_type": 2
00:23:06.846 }
00:23:06.846 ],
00:23:06.846 "driver_specific": {
00:23:06.846 "passthru": {
00:23:06.846 "name": "Passthru0",
00:23:06.846 "base_bdev_name": "Malloc0"
00:23:06.846 }
00:23:06.846 }
00:23:06.846 }
00:23:06.846 ]'
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:23:06.846 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.846 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.106 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.106 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.106 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:23:07.106 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:23:07.106 15:52:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:23:07.106
00:23:07.106 real 0m0.240s
00:23:07.106 user 0m0.124s
00:23:07.106 sys 0m0.034s
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:07.106 ************************************
00:23:07.106 END TEST rpc_integrity
00:23:07.106 ************************************
00:23:07.106 15:52:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:07.106 15:52:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:23:07.106 15:52:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:07.106 15:52:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:07.106 15:52:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:07.106 ************************************
00:23:07.106 START TEST rpc_plugins
00:23:07.106 ************************************
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins
00:23:07.106 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.106 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:23:07.106 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.106 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:23:07.106 {
00:23:07.106 "name": "Malloc1",
00:23:07.106 "aliases": [
00:23:07.106 "6263810b-2e63-47f1-8887-66d747a4b0e9"
00:23:07.106 ],
00:23:07.106 "product_name": "Malloc disk",
00:23:07.106 "block_size": 4096,
00:23:07.106 "num_blocks": 256,
00:23:07.106 "uuid": "6263810b-2e63-47f1-8887-66d747a4b0e9",
00:23:07.106 "assigned_rate_limits": {
00:23:07.106 "rw_ios_per_sec": 0,
00:23:07.106 "rw_mbytes_per_sec": 0,
00:23:07.106 "r_mbytes_per_sec": 0,
00:23:07.106 "w_mbytes_per_sec": 0
00:23:07.106 },
00:23:07.106 "claimed": false,
00:23:07.106 "zoned": false,
00:23:07.106 "supported_io_types": {
00:23:07.106 "read": true,
00:23:07.106 "write": true,
00:23:07.106 "unmap": true,
00:23:07.106 "flush": true,
00:23:07.106 "reset": true,
00:23:07.106 "nvme_admin": false,
00:23:07.106 "nvme_io": false,
00:23:07.106 "nvme_io_md": false,
00:23:07.106 "write_zeroes": true,
00:23:07.106 "zcopy": true,
00:23:07.106 "get_zone_info": false,
00:23:07.106 "zone_management": false,
00:23:07.106 "zone_append": false,
00:23:07.106 "compare": false,
00:23:07.106 "compare_and_write": false,
00:23:07.106 "abort": true,
00:23:07.106 "seek_hole": false,
00:23:07.106 "seek_data": false,
00:23:07.106 "copy": true,
00:23:07.106 "nvme_iov_md": false
00:23:07.106 },
00:23:07.106 "memory_domains": [
00:23:07.106 {
00:23:07.106 "dma_device_id": "system",
00:23:07.106 "dma_device_type": 1
00:23:07.106 },
00:23:07.106 {
00:23:07.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:07.106 "dma_device_type": 2
00:23:07.106 }
00:23:07.106 ],
00:23:07.106 "driver_specific": {}
00:23:07.106 }
00:23:07.106 ]'
00:23:07.106 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:23:07.106 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:23:07.106 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.106 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:07.107 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.107 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:23:07.107 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.107 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:07.107 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.107 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:23:07.107 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:23:07.107 15:52:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:23:07.107
00:23:07.107 real 0m0.111s
00:23:07.107 user 0m0.065s
00:23:07.107 sys 0m0.016s
00:23:07.107 ************************************
00:23:07.107 END TEST rpc_plugins
00:23:07.107 ************************************
00:23:07.107 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:07.107 15:52:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:07.107 15:52:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:23:07.107 15:52:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:07.107 15:52:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:07.107 15:52:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:07.107 ************************************
00:23:07.107 START TEST rpc_trace_cmd_test
00:23:07.107 ************************************
00:23:07.107 15:52:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test
00:23:07.107 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:23:07.107 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:23:07.107 15:52:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.107 15:52:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:23:07.366 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56058",
00:23:07.366 "tpoint_group_mask": "0x8",
00:23:07.366 "iscsi_conn": {
00:23:07.366 "mask": "0x2",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "scsi": {
00:23:07.366 "mask": "0x4",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "bdev": {
00:23:07.366 "mask": "0x8",
00:23:07.366 "tpoint_mask": "0xffffffffffffffff"
00:23:07.366 },
00:23:07.366 "nvmf_rdma": {
00:23:07.366 "mask": "0x10",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "nvmf_tcp": {
00:23:07.366 "mask": "0x20",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "ftl": {
00:23:07.366 "mask": "0x40",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "blobfs": {
00:23:07.366 "mask": "0x80",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "dsa": {
00:23:07.366 "mask": "0x200",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "thread": {
00:23:07.366 "mask": "0x400",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "nvme_pcie": {
00:23:07.366 "mask": "0x800",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "iaa": {
00:23:07.366 "mask": "0x1000",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "nvme_tcp": {
00:23:07.366 "mask": "0x2000",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "bdev_nvme": {
00:23:07.366 "mask": "0x4000",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "sock": {
00:23:07.366 "mask": "0x8000",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "blob": {
00:23:07.366 "mask": "0x10000",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "bdev_raid": {
00:23:07.366 "mask": "0x20000",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 },
00:23:07.366 "scheduler": {
00:23:07.366 "mask": "0x40000",
00:23:07.366 "tpoint_mask": "0x0"
00:23:07.366 }
00:23:07.366 }'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:23:07.366
00:23:07.366 real 0m0.161s
00:23:07.366 user 0m0.132s
00:23:07.366 sys 0m0.022s
00:23:07.366 ************************************
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:07.366 15:52:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:23:07.366 END TEST rpc_trace_cmd_test
00:23:07.366 ************************************
00:23:07.366 15:52:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:23:07.366 15:52:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:23:07.366 15:52:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:23:07.366 15:52:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:07.366 15:52:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:07.366 15:52:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:07.366 ************************************
00:23:07.366 START TEST rpc_daemon_integrity
00:23:07.366 ************************************
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15
-- # malloc=Malloc2 00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.366 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.625 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.625 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:23:07.625 { 00:23:07.625 "name": "Malloc2", 00:23:07.625 "aliases": [ 00:23:07.625 "66cc27cc-4e60-4f4e-a5e7-737248cc4b7b" 00:23:07.625 ], 00:23:07.625 "product_name": "Malloc disk", 00:23:07.625 "block_size": 512, 00:23:07.625 "num_blocks": 16384, 00:23:07.625 "uuid": "66cc27cc-4e60-4f4e-a5e7-737248cc4b7b", 00:23:07.625 "assigned_rate_limits": { 00:23:07.625 "rw_ios_per_sec": 0, 00:23:07.625 "rw_mbytes_per_sec": 0, 00:23:07.625 "r_mbytes_per_sec": 0, 00:23:07.625 "w_mbytes_per_sec": 0 00:23:07.625 }, 00:23:07.625 "claimed": false, 00:23:07.625 "zoned": false, 00:23:07.625 "supported_io_types": { 00:23:07.625 "read": true, 00:23:07.625 "write": true, 00:23:07.625 "unmap": true, 00:23:07.625 "flush": true, 00:23:07.625 "reset": true, 00:23:07.625 "nvme_admin": false, 00:23:07.625 "nvme_io": false, 00:23:07.625 "nvme_io_md": false, 00:23:07.625 "write_zeroes": true, 00:23:07.625 "zcopy": true, 00:23:07.625 "get_zone_info": false, 00:23:07.625 "zone_management": false, 00:23:07.625 "zone_append": false, 00:23:07.625 "compare": false, 00:23:07.625 "compare_and_write": false, 00:23:07.625 "abort": true, 00:23:07.625 "seek_hole": false, 00:23:07.625 "seek_data": false, 00:23:07.625 "copy": true, 00:23:07.626 "nvme_iov_md": false 00:23:07.626 }, 00:23:07.626 "memory_domains": [ 00:23:07.626 { 00:23:07.626 "dma_device_id": "system", 00:23:07.626 "dma_device_type": 1 00:23:07.626 }, 00:23:07.626 { 00:23:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.626 "dma_device_type": 2 00:23:07.626 } 
00:23:07.626 ], 00:23:07.626 "driver_specific": {} 00:23:07.626 } 00:23:07.626 ]' 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 [2024-11-05 15:52:39.829585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:23:07.626 [2024-11-05 15:52:39.829654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.626 [2024-11-05 15:52:39.829675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:07.626 [2024-11-05 15:52:39.829696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.626 [2024-11-05 15:52:39.831890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.626 [2024-11-05 15:52:39.831929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:23:07.626 Passthru0 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:23:07.626 { 00:23:07.626 "name": "Malloc2", 00:23:07.626 "aliases": [ 00:23:07.626 "66cc27cc-4e60-4f4e-a5e7-737248cc4b7b" 
00:23:07.626 ], 00:23:07.626 "product_name": "Malloc disk", 00:23:07.626 "block_size": 512, 00:23:07.626 "num_blocks": 16384, 00:23:07.626 "uuid": "66cc27cc-4e60-4f4e-a5e7-737248cc4b7b", 00:23:07.626 "assigned_rate_limits": { 00:23:07.626 "rw_ios_per_sec": 0, 00:23:07.626 "rw_mbytes_per_sec": 0, 00:23:07.626 "r_mbytes_per_sec": 0, 00:23:07.626 "w_mbytes_per_sec": 0 00:23:07.626 }, 00:23:07.626 "claimed": true, 00:23:07.626 "claim_type": "exclusive_write", 00:23:07.626 "zoned": false, 00:23:07.626 "supported_io_types": { 00:23:07.626 "read": true, 00:23:07.626 "write": true, 00:23:07.626 "unmap": true, 00:23:07.626 "flush": true, 00:23:07.626 "reset": true, 00:23:07.626 "nvme_admin": false, 00:23:07.626 "nvme_io": false, 00:23:07.626 "nvme_io_md": false, 00:23:07.626 "write_zeroes": true, 00:23:07.626 "zcopy": true, 00:23:07.626 "get_zone_info": false, 00:23:07.626 "zone_management": false, 00:23:07.626 "zone_append": false, 00:23:07.626 "compare": false, 00:23:07.626 "compare_and_write": false, 00:23:07.626 "abort": true, 00:23:07.626 "seek_hole": false, 00:23:07.626 "seek_data": false, 00:23:07.626 "copy": true, 00:23:07.626 "nvme_iov_md": false 00:23:07.626 }, 00:23:07.626 "memory_domains": [ 00:23:07.626 { 00:23:07.626 "dma_device_id": "system", 00:23:07.626 "dma_device_type": 1 00:23:07.626 }, 00:23:07.626 { 00:23:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.626 "dma_device_type": 2 00:23:07.626 } 00:23:07.626 ], 00:23:07.626 "driver_specific": {} 00:23:07.626 }, 00:23:07.626 { 00:23:07.626 "name": "Passthru0", 00:23:07.626 "aliases": [ 00:23:07.626 "7361f768-d1a9-5bea-abad-0d01202fd1cc" 00:23:07.626 ], 00:23:07.626 "product_name": "passthru", 00:23:07.626 "block_size": 512, 00:23:07.626 "num_blocks": 16384, 00:23:07.626 "uuid": "7361f768-d1a9-5bea-abad-0d01202fd1cc", 00:23:07.626 "assigned_rate_limits": { 00:23:07.626 "rw_ios_per_sec": 0, 00:23:07.626 "rw_mbytes_per_sec": 0, 00:23:07.626 "r_mbytes_per_sec": 0, 00:23:07.626 "w_mbytes_per_sec": 0 
00:23:07.626 }, 00:23:07.626 "claimed": false, 00:23:07.626 "zoned": false, 00:23:07.626 "supported_io_types": { 00:23:07.626 "read": true, 00:23:07.626 "write": true, 00:23:07.626 "unmap": true, 00:23:07.626 "flush": true, 00:23:07.626 "reset": true, 00:23:07.626 "nvme_admin": false, 00:23:07.626 "nvme_io": false, 00:23:07.626 "nvme_io_md": false, 00:23:07.626 "write_zeroes": true, 00:23:07.626 "zcopy": true, 00:23:07.626 "get_zone_info": false, 00:23:07.626 "zone_management": false, 00:23:07.626 "zone_append": false, 00:23:07.626 "compare": false, 00:23:07.626 "compare_and_write": false, 00:23:07.626 "abort": true, 00:23:07.626 "seek_hole": false, 00:23:07.626 "seek_data": false, 00:23:07.626 "copy": true, 00:23:07.626 "nvme_iov_md": false 00:23:07.626 }, 00:23:07.626 "memory_domains": [ 00:23:07.626 { 00:23:07.626 "dma_device_id": "system", 00:23:07.626 "dma_device_type": 1 00:23:07.626 }, 00:23:07.626 { 00:23:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.626 "dma_device_type": 2 00:23:07.626 } 00:23:07.626 ], 00:23:07.626 "driver_specific": { 00:23:07.626 "passthru": { 00:23:07.626 "name": "Passthru0", 00:23:07.626 "base_bdev_name": "Malloc2" 00:23:07.626 } 00:23:07.626 } 00:23:07.626 } 00:23:07.626 ]' 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:23:07.626 00:23:07.626 real 0m0.242s 00:23:07.626 user 0m0.130s 00:23:07.626 sys 0m0.028s 00:23:07.626 ************************************ 00:23:07.626 END TEST rpc_daemon_integrity 00:23:07.626 ************************************ 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:07.626 15:52:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 15:52:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:07.626 15:52:39 rpc -- rpc/rpc.sh@84 -- # killprocess 56058 00:23:07.626 15:52:39 rpc -- common/autotest_common.sh@952 -- # '[' -z 56058 ']' 00:23:07.626 15:52:39 rpc -- common/autotest_common.sh@956 -- # kill -0 56058 00:23:07.626 15:52:39 rpc -- common/autotest_common.sh@957 -- # uname 00:23:07.626 15:52:39 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:07.626 15:52:39 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56058 00:23:07.626 15:52:40 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:07.626 15:52:40 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:07.626 
killing process with pid 56058 00:23:07.626 15:52:40 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56058' 00:23:07.626 15:52:40 rpc -- common/autotest_common.sh@971 -- # kill 56058 00:23:07.626 15:52:40 rpc -- common/autotest_common.sh@976 -- # wait 56058 00:23:09.534 00:23:09.534 real 0m3.421s 00:23:09.534 user 0m3.847s 00:23:09.534 sys 0m0.594s 00:23:09.534 15:52:41 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:09.534 15:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:09.534 ************************************ 00:23:09.534 END TEST rpc 00:23:09.534 ************************************ 00:23:09.535 15:52:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:23:09.535 15:52:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:09.535 15:52:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:09.535 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:23:09.535 ************************************ 00:23:09.535 START TEST skip_rpc 00:23:09.535 ************************************ 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:23:09.535 * Looking for test storage... 
00:23:09.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.535 15:52:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:09.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.535 --rc genhtml_branch_coverage=1 00:23:09.535 --rc genhtml_function_coverage=1 00:23:09.535 --rc genhtml_legend=1 00:23:09.535 --rc geninfo_all_blocks=1 00:23:09.535 --rc geninfo_unexecuted_blocks=1 00:23:09.535 00:23:09.535 ' 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:09.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.535 --rc genhtml_branch_coverage=1 00:23:09.535 --rc genhtml_function_coverage=1 00:23:09.535 --rc genhtml_legend=1 00:23:09.535 --rc geninfo_all_blocks=1 00:23:09.535 --rc geninfo_unexecuted_blocks=1 00:23:09.535 00:23:09.535 ' 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:23:09.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.535 --rc genhtml_branch_coverage=1 00:23:09.535 --rc genhtml_function_coverage=1 00:23:09.535 --rc genhtml_legend=1 00:23:09.535 --rc geninfo_all_blocks=1 00:23:09.535 --rc geninfo_unexecuted_blocks=1 00:23:09.535 00:23:09.535 ' 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:09.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.535 --rc genhtml_branch_coverage=1 00:23:09.535 --rc genhtml_function_coverage=1 00:23:09.535 --rc genhtml_legend=1 00:23:09.535 --rc geninfo_all_blocks=1 00:23:09.535 --rc geninfo_unexecuted_blocks=1 00:23:09.535 00:23:09.535 ' 00:23:09.535 15:52:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:09.535 15:52:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:09.535 15:52:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:09.535 15:52:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:09.535 ************************************ 00:23:09.535 START TEST skip_rpc 00:23:09.535 ************************************ 00:23:09.535 15:52:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:23:09.535 15:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56276 00:23:09.535 15:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:09.535 15:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:23:09.535 15:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:23:09.535 [2024-11-05 15:52:41.698239] Starting SPDK v25.01-pre 
git sha1 f220d590c / DPDK 24.03.0 initialization... 00:23:09.535 [2024-11-05 15:52:41.698395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56276 ] 00:23:09.535 [2024-11-05 15:52:41.868111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.793 [2024-11-05 15:52:41.969814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56276 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56276 ']' 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56276 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56276 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:15.060 killing process with pid 56276 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56276' 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56276 00:23:15.060 15:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56276 00:23:15.643 00:23:15.643 real 0m6.247s 00:23:15.643 user 0m5.861s 00:23:15.643 sys 0m0.275s 00:23:15.643 15:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.643 ************************************ 00:23:15.643 15:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.643 END TEST skip_rpc 00:23:15.643 ************************************ 00:23:15.643 15:52:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:23:15.643 15:52:47 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:15.643 15:52:47 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.643 15:52:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.643 
************************************ 00:23:15.643 START TEST skip_rpc_with_json 00:23:15.643 ************************************ 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56369 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56369 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56369 ']' 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.643 15:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:15.643 [2024-11-05 15:52:47.954738] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:23:15.643 [2024-11-05 15:52:47.954850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56369 ] 00:23:15.901 [2024-11-05 15:52:48.099537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.901 [2024-11-05 15:52:48.186812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:16.465 [2024-11-05 15:52:48.853568] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:23:16.465 request: 00:23:16.465 { 00:23:16.465 "trtype": "tcp", 00:23:16.465 "method": "nvmf_get_transports", 00:23:16.465 "req_id": 1 00:23:16.465 } 00:23:16.465 Got JSON-RPC error response 00:23:16.465 response: 00:23:16.465 { 00:23:16.465 "code": -19, 00:23:16.465 "message": "No such device" 00:23:16.465 } 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:16.465 [2024-11-05 15:52:48.861667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.465 15:52:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:16.723 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.723 15:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:16.723 { 00:23:16.723 "subsystems": [ 00:23:16.723 { 00:23:16.723 "subsystem": "fsdev", 00:23:16.723 "config": [ 00:23:16.723 { 00:23:16.723 "method": "fsdev_set_opts", 00:23:16.723 "params": { 00:23:16.723 "fsdev_io_pool_size": 65535, 00:23:16.723 "fsdev_io_cache_size": 256 00:23:16.723 } 00:23:16.723 } 00:23:16.723 ] 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "subsystem": "keyring", 00:23:16.723 "config": [] 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "subsystem": "iobuf", 00:23:16.723 "config": [ 00:23:16.723 { 00:23:16.723 "method": "iobuf_set_options", 00:23:16.723 "params": { 00:23:16.723 "small_pool_count": 8192, 00:23:16.723 "large_pool_count": 1024, 00:23:16.723 "small_bufsize": 8192, 00:23:16.723 "large_bufsize": 135168, 00:23:16.723 "enable_numa": false 00:23:16.723 } 00:23:16.723 } 00:23:16.723 ] 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "subsystem": "sock", 00:23:16.723 "config": [ 00:23:16.723 { 00:23:16.723 "method": "sock_set_default_impl", 00:23:16.723 "params": { 00:23:16.723 "impl_name": "posix" 00:23:16.723 } 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "method": "sock_impl_set_options", 00:23:16.723 "params": { 00:23:16.723 "impl_name": "ssl", 00:23:16.723 "recv_buf_size": 4096, 00:23:16.723 "send_buf_size": 4096, 00:23:16.723 "enable_recv_pipe": true, 00:23:16.723 "enable_quickack": false, 00:23:16.723 
"enable_placement_id": 0, 00:23:16.723 "enable_zerocopy_send_server": true, 00:23:16.723 "enable_zerocopy_send_client": false, 00:23:16.723 "zerocopy_threshold": 0, 00:23:16.723 "tls_version": 0, 00:23:16.723 "enable_ktls": false 00:23:16.723 } 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "method": "sock_impl_set_options", 00:23:16.723 "params": { 00:23:16.723 "impl_name": "posix", 00:23:16.723 "recv_buf_size": 2097152, 00:23:16.723 "send_buf_size": 2097152, 00:23:16.723 "enable_recv_pipe": true, 00:23:16.723 "enable_quickack": false, 00:23:16.723 "enable_placement_id": 0, 00:23:16.723 "enable_zerocopy_send_server": true, 00:23:16.723 "enable_zerocopy_send_client": false, 00:23:16.723 "zerocopy_threshold": 0, 00:23:16.723 "tls_version": 0, 00:23:16.723 "enable_ktls": false 00:23:16.723 } 00:23:16.723 } 00:23:16.723 ] 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "subsystem": "vmd", 00:23:16.723 "config": [] 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "subsystem": "accel", 00:23:16.723 "config": [ 00:23:16.723 { 00:23:16.723 "method": "accel_set_options", 00:23:16.723 "params": { 00:23:16.723 "small_cache_size": 128, 00:23:16.723 "large_cache_size": 16, 00:23:16.723 "task_count": 2048, 00:23:16.723 "sequence_count": 2048, 00:23:16.723 "buf_count": 2048 00:23:16.723 } 00:23:16.723 } 00:23:16.723 ] 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "subsystem": "bdev", 00:23:16.723 "config": [ 00:23:16.723 { 00:23:16.723 "method": "bdev_set_options", 00:23:16.723 "params": { 00:23:16.723 "bdev_io_pool_size": 65535, 00:23:16.723 "bdev_io_cache_size": 256, 00:23:16.723 "bdev_auto_examine": true, 00:23:16.723 "iobuf_small_cache_size": 128, 00:23:16.723 "iobuf_large_cache_size": 16 00:23:16.723 } 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "method": "bdev_raid_set_options", 00:23:16.723 "params": { 00:23:16.723 "process_window_size_kb": 1024, 00:23:16.723 "process_max_bandwidth_mb_sec": 0 00:23:16.723 } 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "method": "bdev_iscsi_set_options", 
00:23:16.723 "params": { 00:23:16.723 "timeout_sec": 30 00:23:16.723 } 00:23:16.723 }, 00:23:16.723 { 00:23:16.723 "method": "bdev_nvme_set_options", 00:23:16.723 "params": { 00:23:16.723 "action_on_timeout": "none", 00:23:16.723 "timeout_us": 0, 00:23:16.723 "timeout_admin_us": 0, 00:23:16.723 "keep_alive_timeout_ms": 10000, 00:23:16.723 "arbitration_burst": 0, 00:23:16.723 "low_priority_weight": 0, 00:23:16.723 "medium_priority_weight": 0, 00:23:16.723 "high_priority_weight": 0, 00:23:16.723 "nvme_adminq_poll_period_us": 10000, 00:23:16.723 "nvme_ioq_poll_period_us": 0, 00:23:16.723 "io_queue_requests": 0, 00:23:16.723 "delay_cmd_submit": true, 00:23:16.723 "transport_retry_count": 4, 00:23:16.723 "bdev_retry_count": 3, 00:23:16.723 "transport_ack_timeout": 0, 00:23:16.723 "ctrlr_loss_timeout_sec": 0, 00:23:16.723 "reconnect_delay_sec": 0, 00:23:16.723 "fast_io_fail_timeout_sec": 0, 00:23:16.723 "disable_auto_failback": false, 00:23:16.723 "generate_uuids": false, 00:23:16.723 "transport_tos": 0, 00:23:16.723 "nvme_error_stat": false, 00:23:16.723 "rdma_srq_size": 0, 00:23:16.723 "io_path_stat": false, 00:23:16.723 "allow_accel_sequence": false, 00:23:16.723 "rdma_max_cq_size": 0, 00:23:16.723 "rdma_cm_event_timeout_ms": 0, 00:23:16.723 "dhchap_digests": [ 00:23:16.724 "sha256", 00:23:16.724 "sha384", 00:23:16.724 "sha512" 00:23:16.724 ], 00:23:16.724 "dhchap_dhgroups": [ 00:23:16.724 "null", 00:23:16.724 "ffdhe2048", 00:23:16.724 "ffdhe3072", 00:23:16.724 "ffdhe4096", 00:23:16.724 "ffdhe6144", 00:23:16.724 "ffdhe8192" 00:23:16.724 ] 00:23:16.724 } 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "method": "bdev_nvme_set_hotplug", 00:23:16.724 "params": { 00:23:16.724 "period_us": 100000, 00:23:16.724 "enable": false 00:23:16.724 } 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "method": "bdev_wait_for_examine" 00:23:16.724 } 00:23:16.724 ] 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "subsystem": "scsi", 00:23:16.724 "config": null 00:23:16.724 }, 00:23:16.724 { 
00:23:16.724 "subsystem": "scheduler", 00:23:16.724 "config": [ 00:23:16.724 { 00:23:16.724 "method": "framework_set_scheduler", 00:23:16.724 "params": { 00:23:16.724 "name": "static" 00:23:16.724 } 00:23:16.724 } 00:23:16.724 ] 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "subsystem": "vhost_scsi", 00:23:16.724 "config": [] 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "subsystem": "vhost_blk", 00:23:16.724 "config": [] 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "subsystem": "ublk", 00:23:16.724 "config": [] 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "subsystem": "nbd", 00:23:16.724 "config": [] 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "subsystem": "nvmf", 00:23:16.724 "config": [ 00:23:16.724 { 00:23:16.724 "method": "nvmf_set_config", 00:23:16.724 "params": { 00:23:16.724 "discovery_filter": "match_any", 00:23:16.724 "admin_cmd_passthru": { 00:23:16.724 "identify_ctrlr": false 00:23:16.724 }, 00:23:16.724 "dhchap_digests": [ 00:23:16.724 "sha256", 00:23:16.724 "sha384", 00:23:16.724 "sha512" 00:23:16.724 ], 00:23:16.724 "dhchap_dhgroups": [ 00:23:16.724 "null", 00:23:16.724 "ffdhe2048", 00:23:16.724 "ffdhe3072", 00:23:16.724 "ffdhe4096", 00:23:16.724 "ffdhe6144", 00:23:16.724 "ffdhe8192" 00:23:16.724 ] 00:23:16.724 } 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "method": "nvmf_set_max_subsystems", 00:23:16.724 "params": { 00:23:16.724 "max_subsystems": 1024 00:23:16.724 } 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "method": "nvmf_set_crdt", 00:23:16.724 "params": { 00:23:16.724 "crdt1": 0, 00:23:16.724 "crdt2": 0, 00:23:16.724 "crdt3": 0 00:23:16.724 } 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "method": "nvmf_create_transport", 00:23:16.724 "params": { 00:23:16.724 "trtype": "TCP", 00:23:16.724 "max_queue_depth": 128, 00:23:16.724 "max_io_qpairs_per_ctrlr": 127, 00:23:16.724 "in_capsule_data_size": 4096, 00:23:16.724 "max_io_size": 131072, 00:23:16.724 "io_unit_size": 131072, 00:23:16.724 "max_aq_depth": 128, 00:23:16.724 "num_shared_buffers": 511, 
00:23:16.724 "buf_cache_size": 4294967295, 00:23:16.724 "dif_insert_or_strip": false, 00:23:16.724 "zcopy": false, 00:23:16.724 "c2h_success": true, 00:23:16.724 "sock_priority": 0, 00:23:16.724 "abort_timeout_sec": 1, 00:23:16.724 "ack_timeout": 0, 00:23:16.724 "data_wr_pool_size": 0 00:23:16.724 } 00:23:16.724 } 00:23:16.724 ] 00:23:16.724 }, 00:23:16.724 { 00:23:16.724 "subsystem": "iscsi", 00:23:16.724 "config": [ 00:23:16.724 { 00:23:16.724 "method": "iscsi_set_options", 00:23:16.724 "params": { 00:23:16.724 "node_base": "iqn.2016-06.io.spdk", 00:23:16.724 "max_sessions": 128, 00:23:16.724 "max_connections_per_session": 2, 00:23:16.724 "max_queue_depth": 64, 00:23:16.724 "default_time2wait": 2, 00:23:16.724 "default_time2retain": 20, 00:23:16.724 "first_burst_length": 8192, 00:23:16.724 "immediate_data": true, 00:23:16.724 "allow_duplicated_isid": false, 00:23:16.724 "error_recovery_level": 0, 00:23:16.724 "nop_timeout": 60, 00:23:16.724 "nop_in_interval": 30, 00:23:16.724 "disable_chap": false, 00:23:16.724 "require_chap": false, 00:23:16.724 "mutual_chap": false, 00:23:16.724 "chap_group": 0, 00:23:16.724 "max_large_datain_per_connection": 64, 00:23:16.724 "max_r2t_per_connection": 4, 00:23:16.724 "pdu_pool_size": 36864, 00:23:16.724 "immediate_data_pool_size": 16384, 00:23:16.724 "data_out_pool_size": 2048 00:23:16.724 } 00:23:16.724 } 00:23:16.724 ] 00:23:16.724 } 00:23:16.724 ] 00:23:16.724 } 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56369 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56369 ']' 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56369 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56369 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:16.724 killing process with pid 56369 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56369' 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56369 00:23:16.724 15:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56369 00:23:18.096 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56403 00:23:18.096 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:23:18.096 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56403 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56403 ']' 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56403 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56403 00:23:23.426 killing process with pid 56403 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56403' 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56403 00:23:23.426 15:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56403 00:23:24.358 15:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:24.358 15:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:24.358 00:23:24.358 real 0m8.598s 00:23:24.358 user 0m8.298s 00:23:24.358 sys 0m0.574s 00:23:24.358 15:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:24.358 15:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:24.358 ************************************ 00:23:24.358 END TEST skip_rpc_with_json 00:23:24.358 ************************************ 00:23:24.358 15:52:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:23:24.358 15:52:56 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:24.358 15:52:56 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:24.358 15:52:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.358 ************************************ 00:23:24.358 START TEST skip_rpc_with_delay 00:23:24.359 ************************************ 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:23:24.359 
15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:24.359 [2024-11-05 15:52:56.601730] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:24.359 00:23:24.359 real 0m0.122s 00:23:24.359 user 0m0.068s 00:23:24.359 sys 0m0.052s 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:24.359 15:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:23:24.359 ************************************ 00:23:24.359 END TEST skip_rpc_with_delay 00:23:24.359 ************************************ 00:23:24.359 15:52:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:23:24.359 15:52:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:23:24.359 15:52:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:23:24.359 15:52:56 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:24.359 15:52:56 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:24.359 15:52:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.359 ************************************ 00:23:24.359 START TEST exit_on_failed_rpc_init 00:23:24.359 ************************************ 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56526 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56526 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:24.359 15:52:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 56526 ']' 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:24.359 15:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.359 [2024-11-05 15:52:56.764100] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:23:24.359 [2024-11-05 15:52:56.764234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56526 ] 00:23:24.615 [2024-11-05 15:52:56.920160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.615 [2024-11-05 15:52:57.005744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:25.181 15:52:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.181 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:25.439 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.439 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:25.439 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.439 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:25.439 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:23:25.439 15:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:25.439 [2024-11-05 15:52:57.676882] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:23:25.439 [2024-11-05 15:52:57.677013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56544 ] 00:23:25.439 [2024-11-05 15:52:57.827783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.697 [2024-11-05 15:52:57.928801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.697 [2024-11-05 15:52:57.928894] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:23:25.697 [2024-11-05 15:52:57.928908] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:25.697 [2024-11-05 15:52:57.928918] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56526 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 56526 ']' 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 56526 00:23:25.954 15:52:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56526 00:23:25.954 killing process with pid 56526 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56526' 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 56526 00:23:25.954 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 56526 00:23:27.328 00:23:27.328 real 0m2.673s 00:23:27.328 user 0m2.978s 00:23:27.328 sys 0m0.417s 00:23:27.328 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:27.328 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.328 ************************************ 00:23:27.328 END TEST exit_on_failed_rpc_init 00:23:27.328 ************************************ 00:23:27.328 15:52:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:27.328 00:23:27.328 real 0m17.927s 00:23:27.328 user 0m17.335s 00:23:27.328 sys 0m1.477s 00:23:27.328 15:52:59 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:27.329 15:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:27.329 ************************************ 00:23:27.329 END TEST skip_rpc 00:23:27.329 ************************************ 00:23:27.329 15:52:59 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:23:27.329 15:52:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:27.329 15:52:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:27.329 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:23:27.329 ************************************ 00:23:27.329 START TEST rpc_client 00:23:27.329 ************************************ 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:23:27.329 * Looking for test storage... 00:23:27.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@345 
-- # : 1 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@353 -- # local d=1 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@355 -- # echo 1 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@353 -- # local d=2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@355 -- # echo 2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.329 15:52:59 rpc_client -- scripts/common.sh@368 -- # return 0 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.329 --rc genhtml_branch_coverage=1 00:23:27.329 --rc genhtml_function_coverage=1 00:23:27.329 --rc genhtml_legend=1 00:23:27.329 --rc geninfo_all_blocks=1 00:23:27.329 --rc geninfo_unexecuted_blocks=1 00:23:27.329 00:23:27.329 ' 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.329 --rc genhtml_branch_coverage=1 00:23:27.329 --rc genhtml_function_coverage=1 00:23:27.329 --rc 
genhtml_legend=1 00:23:27.329 --rc geninfo_all_blocks=1 00:23:27.329 --rc geninfo_unexecuted_blocks=1 00:23:27.329 00:23:27.329 ' 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.329 --rc genhtml_branch_coverage=1 00:23:27.329 --rc genhtml_function_coverage=1 00:23:27.329 --rc genhtml_legend=1 00:23:27.329 --rc geninfo_all_blocks=1 00:23:27.329 --rc geninfo_unexecuted_blocks=1 00:23:27.329 00:23:27.329 ' 00:23:27.329 15:52:59 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.329 --rc genhtml_branch_coverage=1 00:23:27.329 --rc genhtml_function_coverage=1 00:23:27.329 --rc genhtml_legend=1 00:23:27.329 --rc geninfo_all_blocks=1 00:23:27.330 --rc geninfo_unexecuted_blocks=1 00:23:27.330 00:23:27.330 ' 00:23:27.330 15:52:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:23:27.330 OK 00:23:27.330 15:52:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:23:27.330 00:23:27.330 real 0m0.181s 00:23:27.330 user 0m0.103s 00:23:27.330 sys 0m0.085s 00:23:27.330 15:52:59 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:27.330 15:52:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:23:27.330 ************************************ 00:23:27.330 END TEST rpc_client 00:23:27.330 ************************************ 00:23:27.330 15:52:59 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:23:27.330 15:52:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:27.330 15:52:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:27.330 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:23:27.330 ************************************ 00:23:27.330 START TEST json_config 
00:23:27.330 ************************************ 00:23:27.330 15:52:59 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:23:27.330 15:52:59 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:27.330 15:52:59 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:23:27.330 15:52:59 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:27.330 15:52:59 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:27.330 15:52:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.330 15:52:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.330 15:52:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.330 15:52:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.330 15:52:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.330 15:52:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.330 15:52:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.330 15:52:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.330 15:52:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.330 15:52:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.330 15:52:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.330 15:52:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:23:27.330 15:52:59 json_config -- scripts/common.sh@345 -- # : 1 00:23:27.330 15:52:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.330 15:52:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.592 15:52:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:23:27.592 15:52:59 json_config -- scripts/common.sh@353 -- # local d=1 00:23:27.592 15:52:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.592 15:52:59 json_config -- scripts/common.sh@355 -- # echo 1 00:23:27.592 15:52:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.592 15:52:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:23:27.592 15:52:59 json_config -- scripts/common.sh@353 -- # local d=2 00:23:27.592 15:52:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.592 15:52:59 json_config -- scripts/common.sh@355 -- # echo 2 00:23:27.592 15:52:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.592 15:52:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.592 15:52:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.592 15:52:59 json_config -- scripts/common.sh@368 -- # return 0 00:23:27.592 15:52:59 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.592 15:52:59 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:27.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.592 --rc genhtml_branch_coverage=1 00:23:27.592 --rc genhtml_function_coverage=1 00:23:27.592 --rc genhtml_legend=1 00:23:27.592 --rc geninfo_all_blocks=1 00:23:27.592 --rc geninfo_unexecuted_blocks=1 00:23:27.592 00:23:27.592 ' 00:23:27.592 15:52:59 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:27.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.593 --rc genhtml_branch_coverage=1 00:23:27.593 --rc genhtml_function_coverage=1 00:23:27.593 --rc genhtml_legend=1 00:23:27.593 --rc geninfo_all_blocks=1 00:23:27.593 --rc geninfo_unexecuted_blocks=1 00:23:27.593 00:23:27.593 ' 00:23:27.593 15:52:59 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:27.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.593 --rc genhtml_branch_coverage=1 00:23:27.593 --rc genhtml_function_coverage=1 00:23:27.593 --rc genhtml_legend=1 00:23:27.593 --rc geninfo_all_blocks=1 00:23:27.593 --rc geninfo_unexecuted_blocks=1 00:23:27.593 00:23:27.593 ' 00:23:27.593 15:52:59 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:27.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.593 --rc genhtml_branch_coverage=1 00:23:27.593 --rc genhtml_function_coverage=1 00:23:27.593 --rc genhtml_legend=1 00:23:27.593 --rc geninfo_all_blocks=1 00:23:27.593 --rc geninfo_unexecuted_blocks=1 00:23:27.593 00:23:27.593 ' 00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:24555432-5c09-44b1-a72b-d75c56d455b0 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=24555432-5c09-44b1-a72b-d75c56d455b0 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.593 15:52:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.593 15:52:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.593 15:52:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.593 15:52:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.593 15:52:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.593 15:52:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.593 15:52:59 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.593 15:52:59 json_config -- paths/export.sh@5 -- # export PATH 00:23:27.593 15:52:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@51 -- # : 0 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.593 15:52:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:23:27.593 WARNING: No tests are enabled so not running JSON configuration tests 00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:23:27.593 15:52:59 json_config -- json_config/json_config.sh@28 -- # exit 0 00:23:27.593 00:23:27.593 real 0m0.133s 00:23:27.593 user 0m0.090s 00:23:27.593 sys 0m0.050s 00:23:27.593 15:52:59 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:27.593 15:52:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:23:27.593 ************************************ 00:23:27.593 END TEST json_config 00:23:27.593 ************************************ 00:23:27.593 15:52:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:27.593 15:52:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:27.593 15:52:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:27.593 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:23:27.593 ************************************ 00:23:27.593 START TEST json_config_extra_key 00:23:27.593 ************************************ 00:23:27.593 15:52:59 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:27.593 15:52:59 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:27.593 15:52:59 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:23:27.593 15:52:59 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:27.593 15:52:59 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.593 15:52:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:23:27.593 15:52:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.593 15:52:59 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:27.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.593 --rc genhtml_branch_coverage=1 00:23:27.593 --rc genhtml_function_coverage=1 00:23:27.593 --rc genhtml_legend=1 00:23:27.593 --rc geninfo_all_blocks=1 00:23:27.593 --rc geninfo_unexecuted_blocks=1 00:23:27.593 00:23:27.593 ' 00:23:27.593 15:52:59 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:27.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.593 --rc genhtml_branch_coverage=1 00:23:27.594 --rc genhtml_function_coverage=1 00:23:27.594 --rc 
genhtml_legend=1 00:23:27.594 --rc geninfo_all_blocks=1 00:23:27.594 --rc geninfo_unexecuted_blocks=1 00:23:27.594 00:23:27.594 ' 00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.594 --rc genhtml_branch_coverage=1 00:23:27.594 --rc genhtml_function_coverage=1 00:23:27.594 --rc genhtml_legend=1 00:23:27.594 --rc geninfo_all_blocks=1 00:23:27.594 --rc geninfo_unexecuted_blocks=1 00:23:27.594 00:23:27.594 ' 00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.594 --rc genhtml_branch_coverage=1 00:23:27.594 --rc genhtml_function_coverage=1 00:23:27.594 --rc genhtml_legend=1 00:23:27.594 --rc geninfo_all_blocks=1 00:23:27.594 --rc geninfo_unexecuted_blocks=1 00:23:27.594 00:23:27.594 ' 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:24555432-5c09-44b1-a72b-d75c56d455b0 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=24555432-5c09-44b1-a72b-d75c56d455b0 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.594 15:52:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.594 15:52:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.594 15:52:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.594 15:52:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.594 15:52:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.594 15:52:59 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.594 15:52:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.594 15:52:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:23:27.594 15:52:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.594 15:52:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:23:27.594 INFO: launching applications... 00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:23:27.594 15:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56732 00:23:27.594 Waiting for target to run... 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56732 /var/tmp/spdk_tgt.sock 00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 56732 ']' 00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:27.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
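The trace above launches `spdk_tgt` and then blocks in `waitforlisten 56732 /var/tmp/spdk_tgt.sock` until the RPC socket appears. A minimal re-creation of that polling pattern (a hypothetical helper sketched from the trace, not SPDK's actual implementation) looks like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the waitforlisten pattern seen in the trace:
# poll until the target PID is alive AND its UNIX-domain socket exists,
# giving up after a bounded number of retries.
waitforlisten() {
  local pid=$1 sock=$2 retries=${3:-100}
  while (( retries-- > 0 )); do
    # If the process died, there is no point in waiting any further.
    kill -0 "$pid" 2>/dev/null || return 1
    # -S tests for a socket file; the RPC server creates it once ready.
    [[ -S $sock ]] && return 0
    sleep 0.1
  done
  return 1  # timed out
}
```

The real script additionally prints progress messages (the "Waiting for target to run..." lines above); this sketch keeps only the liveness check plus the socket test.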
00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:27.594 15:52:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:23:27.594 15:52:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:27.851 [2024-11-05 15:53:00.025445] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:23:27.851 [2024-11-05 15:53:00.025576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56732 ] 00:23:28.108 [2024-11-05 15:53:00.352590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.108 [2024-11-05 15:53:00.446984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.673 00:23:28.673 INFO: shutting down applications... 00:23:28.673 15:53:00 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:28.673 15:53:00 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:23:28.673 15:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:23:28.673 15:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56732 ]] 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56732 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56732 00:23:28.673 15:53:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:29.237 15:53:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:29.237 15:53:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:29.237 15:53:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56732 00:23:29.237 15:53:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:29.803 15:53:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:29.803 15:53:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:29.803 15:53:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56732 00:23:29.803 15:53:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:30.061 15:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:30.061 15:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:30.061 15:53:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56732 00:23:30.061 15:53:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:30.625 15:53:02 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:23:30.625 15:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:30.625 15:53:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56732 00:23:30.625 15:53:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:23:30.625 SPDK target shutdown done 00:23:30.625 Success 00:23:30.625 15:53:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:23:30.625 15:53:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:23:30.625 15:53:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:23:30.625 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:23:30.625 00:23:30.625 real 0m3.160s 00:23:30.625 user 0m2.772s 00:23:30.625 sys 0m0.427s 00:23:30.625 ************************************ 00:23:30.625 END TEST json_config_extra_key 00:23:30.626 ************************************ 00:23:30.626 15:53:02 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.626 15:53:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:23:30.626 15:53:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:30.626 15:53:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:30.626 15:53:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.626 15:53:02 -- common/autotest_common.sh@10 -- # set +x 00:23:30.626 ************************************ 00:23:30.626 START TEST alias_rpc 00:23:30.626 ************************************ 00:23:30.626 15:53:03 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:30.884 * Looking for test storage... 
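The shutdown sequence traced above (`kill -SIGINT 56732`, then repeated `kill -0 56732` probes separated by `sleep 0.5`, capped at 30 iterations) is a common graceful-shutdown idiom. A standalone sketch under those assumptions (hypothetical helper name; the signal is parameterized so it can also be exercised with SIGTERM):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the shutdown loop in the trace: signal the app,
# then poll with `kill -0` every 0.5 s, giving up after 30 attempts (~15 s).
shutdown_app() {
  local pid=$1 sig=${2:-INT}
  kill -s "$sig" "$pid" 2>/dev/null
  local i
  for (( i = 0; i < 30; i++ )); do
    # kill -0 sends no signal; it only checks that the PID still exists.
    kill -0 "$pid" 2>/dev/null || return 0
    sleep 0.5
  done
  return 1  # still running after the retry budget
}
```

Note that bash reaps terminated background children as soon as they exit, so `kill -0` on an already-dead child fails immediately rather than seeing a zombie.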
00:23:30.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:23:30.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.884 15:53:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.884 --rc genhtml_branch_coverage=1 00:23:30.884 --rc genhtml_function_coverage=1 00:23:30.884 --rc genhtml_legend=1 00:23:30.884 --rc geninfo_all_blocks=1 00:23:30.884 --rc geninfo_unexecuted_blocks=1 00:23:30.884 00:23:30.884 ' 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.884 --rc genhtml_branch_coverage=1 00:23:30.884 --rc genhtml_function_coverage=1 00:23:30.884 --rc genhtml_legend=1 00:23:30.884 --rc geninfo_all_blocks=1 00:23:30.884 --rc 
geninfo_unexecuted_blocks=1 00:23:30.884 00:23:30.884 ' 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.884 --rc genhtml_branch_coverage=1 00:23:30.884 --rc genhtml_function_coverage=1 00:23:30.884 --rc genhtml_legend=1 00:23:30.884 --rc geninfo_all_blocks=1 00:23:30.884 --rc geninfo_unexecuted_blocks=1 00:23:30.884 00:23:30.884 ' 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.884 --rc genhtml_branch_coverage=1 00:23:30.884 --rc genhtml_function_coverage=1 00:23:30.884 --rc genhtml_legend=1 00:23:30.884 --rc geninfo_all_blocks=1 00:23:30.884 --rc geninfo_unexecuted_blocks=1 00:23:30.884 00:23:30.884 ' 00:23:30.884 15:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:23:30.884 15:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56830 00:23:30.884 15:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56830 00:23:30.884 15:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 56830 ']' 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:30.884 15:53:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:30.884 [2024-11-05 15:53:03.204095] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:23:30.884 [2024-11-05 15:53:03.204405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56830 ] 00:23:31.142 [2024-11-05 15:53:03.361196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.142 [2024-11-05 15:53:03.464838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.707 15:53:04 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:31.707 15:53:04 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:23:31.707 15:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:23:31.965 15:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56830 00:23:31.965 15:53:04 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 56830 ']' 00:23:31.965 15:53:04 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 56830 00:23:31.965 15:53:04 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:23:31.965 15:53:04 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.965 15:53:04 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56830 00:23:32.253 killing process with pid 56830 00:23:32.253 15:53:04 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:32.253 15:53:04 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:32.253 15:53:04 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56830' 00:23:32.253 15:53:04 alias_rpc -- common/autotest_common.sh@971 -- # kill 56830 00:23:32.253 15:53:04 alias_rpc -- common/autotest_common.sh@976 -- # wait 56830 00:23:33.626 ************************************ 00:23:33.626 END TEST alias_rpc 00:23:33.626 ************************************ 00:23:33.626 00:23:33.626 real 
0m2.911s 00:23:33.626 user 0m3.037s 00:23:33.626 sys 0m0.400s 00:23:33.626 15:53:05 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:33.626 15:53:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:33.626 15:53:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:23:33.626 15:53:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:23:33.626 15:53:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:33.626 15:53:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:33.626 15:53:05 -- common/autotest_common.sh@10 -- # set +x 00:23:33.626 ************************************ 00:23:33.626 START TEST spdkcli_tcp 00:23:33.626 ************************************ 00:23:33.626 15:53:05 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:23:33.626 * Looking for test storage... 00:23:33.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:33.626 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:33.626 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:33.626 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:23:33.884 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.884 
15:53:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.884 15:53:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:23:33.884 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.884 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:33.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.884 --rc genhtml_branch_coverage=1 00:23:33.884 --rc genhtml_function_coverage=1 00:23:33.884 --rc genhtml_legend=1 
00:23:33.884 --rc geninfo_all_blocks=1 00:23:33.884 --rc geninfo_unexecuted_blocks=1 00:23:33.885 00:23:33.885 ' 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.885 --rc genhtml_branch_coverage=1 00:23:33.885 --rc genhtml_function_coverage=1 00:23:33.885 --rc genhtml_legend=1 00:23:33.885 --rc geninfo_all_blocks=1 00:23:33.885 --rc geninfo_unexecuted_blocks=1 00:23:33.885 00:23:33.885 ' 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.885 --rc genhtml_branch_coverage=1 00:23:33.885 --rc genhtml_function_coverage=1 00:23:33.885 --rc genhtml_legend=1 00:23:33.885 --rc geninfo_all_blocks=1 00:23:33.885 --rc geninfo_unexecuted_blocks=1 00:23:33.885 00:23:33.885 ' 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.885 --rc genhtml_branch_coverage=1 00:23:33.885 --rc genhtml_function_coverage=1 00:23:33.885 --rc genhtml_legend=1 00:23:33.885 --rc geninfo_all_blocks=1 00:23:33.885 --rc geninfo_unexecuted_blocks=1 00:23:33.885 00:23:33.885 ' 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.885 15:53:06 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56921 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 56921 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 56921 ']' 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.885 15:53:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:33.885 15:53:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.885 [2024-11-05 15:53:06.177433] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:23:33.885 [2024-11-05 15:53:06.177670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56921 ] 00:23:34.142 [2024-11-05 15:53:06.341762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:34.142 [2024-11-05 15:53:06.444476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.142 [2024-11-05 15:53:06.444480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.707 15:53:07 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.707 15:53:07 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:23:34.707 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=56938 00:23:34.707 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:23:34.707 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:23:35.037 [ 00:23:35.037 "bdev_malloc_delete", 00:23:35.037 "bdev_malloc_create", 00:23:35.037 "bdev_null_resize", 00:23:35.037 "bdev_null_delete", 00:23:35.038 "bdev_null_create", 00:23:35.038 "bdev_nvme_cuse_unregister", 00:23:35.038 "bdev_nvme_cuse_register", 00:23:35.038 "bdev_opal_new_user", 00:23:35.038 "bdev_opal_set_lock_state", 00:23:35.038 "bdev_opal_delete", 00:23:35.038 "bdev_opal_get_info", 00:23:35.038 "bdev_opal_create", 00:23:35.038 "bdev_nvme_opal_revert", 00:23:35.038 "bdev_nvme_opal_init", 00:23:35.038 "bdev_nvme_send_cmd", 00:23:35.038 "bdev_nvme_set_keys", 00:23:35.038 "bdev_nvme_get_path_iostat", 00:23:35.038 "bdev_nvme_get_mdns_discovery_info", 00:23:35.038 "bdev_nvme_stop_mdns_discovery", 00:23:35.038 "bdev_nvme_start_mdns_discovery", 00:23:35.038 "bdev_nvme_set_multipath_policy", 00:23:35.038 
"bdev_nvme_set_preferred_path", 00:23:35.038 "bdev_nvme_get_io_paths", 00:23:35.038 "bdev_nvme_remove_error_injection", 00:23:35.038 "bdev_nvme_add_error_injection", 00:23:35.038 "bdev_nvme_get_discovery_info", 00:23:35.038 "bdev_nvme_stop_discovery", 00:23:35.038 "bdev_nvme_start_discovery", 00:23:35.038 "bdev_nvme_get_controller_health_info", 00:23:35.038 "bdev_nvme_disable_controller", 00:23:35.038 "bdev_nvme_enable_controller", 00:23:35.038 "bdev_nvme_reset_controller", 00:23:35.038 "bdev_nvme_get_transport_statistics", 00:23:35.038 "bdev_nvme_apply_firmware", 00:23:35.038 "bdev_nvme_detach_controller", 00:23:35.038 "bdev_nvme_get_controllers", 00:23:35.038 "bdev_nvme_attach_controller", 00:23:35.038 "bdev_nvme_set_hotplug", 00:23:35.038 "bdev_nvme_set_options", 00:23:35.038 "bdev_passthru_delete", 00:23:35.038 "bdev_passthru_create", 00:23:35.038 "bdev_lvol_set_parent_bdev", 00:23:35.038 "bdev_lvol_set_parent", 00:23:35.038 "bdev_lvol_check_shallow_copy", 00:23:35.038 "bdev_lvol_start_shallow_copy", 00:23:35.038 "bdev_lvol_grow_lvstore", 00:23:35.038 "bdev_lvol_get_lvols", 00:23:35.038 "bdev_lvol_get_lvstores", 00:23:35.038 "bdev_lvol_delete", 00:23:35.038 "bdev_lvol_set_read_only", 00:23:35.038 "bdev_lvol_resize", 00:23:35.038 "bdev_lvol_decouple_parent", 00:23:35.038 "bdev_lvol_inflate", 00:23:35.038 "bdev_lvol_rename", 00:23:35.038 "bdev_lvol_clone_bdev", 00:23:35.038 "bdev_lvol_clone", 00:23:35.038 "bdev_lvol_snapshot", 00:23:35.038 "bdev_lvol_create", 00:23:35.038 "bdev_lvol_delete_lvstore", 00:23:35.038 "bdev_lvol_rename_lvstore", 00:23:35.038 "bdev_lvol_create_lvstore", 00:23:35.038 "bdev_raid_set_options", 00:23:35.038 "bdev_raid_remove_base_bdev", 00:23:35.038 "bdev_raid_add_base_bdev", 00:23:35.038 "bdev_raid_delete", 00:23:35.038 "bdev_raid_create", 00:23:35.038 "bdev_raid_get_bdevs", 00:23:35.038 "bdev_error_inject_error", 00:23:35.038 "bdev_error_delete", 00:23:35.038 "bdev_error_create", 00:23:35.038 "bdev_split_delete", 00:23:35.038 
"bdev_split_create", 00:23:35.038 "bdev_delay_delete", 00:23:35.038 "bdev_delay_create", 00:23:35.038 "bdev_delay_update_latency", 00:23:35.038 "bdev_zone_block_delete", 00:23:35.038 "bdev_zone_block_create", 00:23:35.038 "blobfs_create", 00:23:35.038 "blobfs_detect", 00:23:35.038 "blobfs_set_cache_size", 00:23:35.038 "bdev_aio_delete", 00:23:35.038 "bdev_aio_rescan", 00:23:35.038 "bdev_aio_create", 00:23:35.038 "bdev_ftl_set_property", 00:23:35.038 "bdev_ftl_get_properties", 00:23:35.038 "bdev_ftl_get_stats", 00:23:35.038 "bdev_ftl_unmap", 00:23:35.038 "bdev_ftl_unload", 00:23:35.038 "bdev_ftl_delete", 00:23:35.038 "bdev_ftl_load", 00:23:35.038 "bdev_ftl_create", 00:23:35.038 "bdev_virtio_attach_controller", 00:23:35.038 "bdev_virtio_scsi_get_devices", 00:23:35.038 "bdev_virtio_detach_controller", 00:23:35.038 "bdev_virtio_blk_set_hotplug", 00:23:35.038 "bdev_iscsi_delete", 00:23:35.038 "bdev_iscsi_create", 00:23:35.038 "bdev_iscsi_set_options", 00:23:35.038 "accel_error_inject_error", 00:23:35.038 "ioat_scan_accel_module", 00:23:35.038 "dsa_scan_accel_module", 00:23:35.038 "iaa_scan_accel_module", 00:23:35.038 "keyring_file_remove_key", 00:23:35.038 "keyring_file_add_key", 00:23:35.038 "keyring_linux_set_options", 00:23:35.038 "fsdev_aio_delete", 00:23:35.038 "fsdev_aio_create", 00:23:35.038 "iscsi_get_histogram", 00:23:35.038 "iscsi_enable_histogram", 00:23:35.038 "iscsi_set_options", 00:23:35.038 "iscsi_get_auth_groups", 00:23:35.038 "iscsi_auth_group_remove_secret", 00:23:35.038 "iscsi_auth_group_add_secret", 00:23:35.038 "iscsi_delete_auth_group", 00:23:35.038 "iscsi_create_auth_group", 00:23:35.038 "iscsi_set_discovery_auth", 00:23:35.038 "iscsi_get_options", 00:23:35.038 "iscsi_target_node_request_logout", 00:23:35.038 "iscsi_target_node_set_redirect", 00:23:35.038 "iscsi_target_node_set_auth", 00:23:35.038 "iscsi_target_node_add_lun", 00:23:35.038 "iscsi_get_stats", 00:23:35.038 "iscsi_get_connections", 00:23:35.038 "iscsi_portal_group_set_auth", 
00:23:35.038 "iscsi_start_portal_group", 00:23:35.038 "iscsi_delete_portal_group", 00:23:35.038 "iscsi_create_portal_group", 00:23:35.038 "iscsi_get_portal_groups", 00:23:35.038 "iscsi_delete_target_node", 00:23:35.038 "iscsi_target_node_remove_pg_ig_maps", 00:23:35.038 "iscsi_target_node_add_pg_ig_maps", 00:23:35.038 "iscsi_create_target_node", 00:23:35.038 "iscsi_get_target_nodes", 00:23:35.038 "iscsi_delete_initiator_group", 00:23:35.038 "iscsi_initiator_group_remove_initiators", 00:23:35.038 "iscsi_initiator_group_add_initiators", 00:23:35.038 "iscsi_create_initiator_group", 00:23:35.038 "iscsi_get_initiator_groups", 00:23:35.038 "nvmf_set_crdt", 00:23:35.038 "nvmf_set_config", 00:23:35.038 "nvmf_set_max_subsystems", 00:23:35.038 "nvmf_stop_mdns_prr", 00:23:35.038 "nvmf_publish_mdns_prr", 00:23:35.038 "nvmf_subsystem_get_listeners", 00:23:35.038 "nvmf_subsystem_get_qpairs", 00:23:35.038 "nvmf_subsystem_get_controllers", 00:23:35.038 "nvmf_get_stats", 00:23:35.038 "nvmf_get_transports", 00:23:35.038 "nvmf_create_transport", 00:23:35.038 "nvmf_get_targets", 00:23:35.038 "nvmf_delete_target", 00:23:35.038 "nvmf_create_target", 00:23:35.038 "nvmf_subsystem_allow_any_host", 00:23:35.038 "nvmf_subsystem_set_keys", 00:23:35.038 "nvmf_subsystem_remove_host", 00:23:35.038 "nvmf_subsystem_add_host", 00:23:35.038 "nvmf_ns_remove_host", 00:23:35.038 "nvmf_ns_add_host", 00:23:35.038 "nvmf_subsystem_remove_ns", 00:23:35.038 "nvmf_subsystem_set_ns_ana_group", 00:23:35.038 "nvmf_subsystem_add_ns", 00:23:35.038 "nvmf_subsystem_listener_set_ana_state", 00:23:35.038 "nvmf_discovery_get_referrals", 00:23:35.038 "nvmf_discovery_remove_referral", 00:23:35.038 "nvmf_discovery_add_referral", 00:23:35.038 "nvmf_subsystem_remove_listener", 00:23:35.038 "nvmf_subsystem_add_listener", 00:23:35.038 "nvmf_delete_subsystem", 00:23:35.038 "nvmf_create_subsystem", 00:23:35.038 "nvmf_get_subsystems", 00:23:35.038 "env_dpdk_get_mem_stats", 00:23:35.038 "nbd_get_disks", 00:23:35.038 
"nbd_stop_disk", 00:23:35.038 "nbd_start_disk", 00:23:35.038 "ublk_recover_disk", 00:23:35.038 "ublk_get_disks", 00:23:35.038 "ublk_stop_disk", 00:23:35.038 "ublk_start_disk", 00:23:35.038 "ublk_destroy_target", 00:23:35.038 "ublk_create_target", 00:23:35.038 "virtio_blk_create_transport", 00:23:35.038 "virtio_blk_get_transports", 00:23:35.038 "vhost_controller_set_coalescing", 00:23:35.038 "vhost_get_controllers", 00:23:35.038 "vhost_delete_controller", 00:23:35.038 "vhost_create_blk_controller", 00:23:35.038 "vhost_scsi_controller_remove_target", 00:23:35.038 "vhost_scsi_controller_add_target", 00:23:35.038 "vhost_start_scsi_controller", 00:23:35.038 "vhost_create_scsi_controller", 00:23:35.038 "thread_set_cpumask", 00:23:35.038 "scheduler_set_options", 00:23:35.038 "framework_get_governor", 00:23:35.038 "framework_get_scheduler", 00:23:35.039 "framework_set_scheduler", 00:23:35.039 "framework_get_reactors", 00:23:35.039 "thread_get_io_channels", 00:23:35.039 "thread_get_pollers", 00:23:35.039 "thread_get_stats", 00:23:35.039 "framework_monitor_context_switch", 00:23:35.039 "spdk_kill_instance", 00:23:35.039 "log_enable_timestamps", 00:23:35.039 "log_get_flags", 00:23:35.039 "log_clear_flag", 00:23:35.039 "log_set_flag", 00:23:35.039 "log_get_level", 00:23:35.039 "log_set_level", 00:23:35.039 "log_get_print_level", 00:23:35.039 "log_set_print_level", 00:23:35.039 "framework_enable_cpumask_locks", 00:23:35.039 "framework_disable_cpumask_locks", 00:23:35.039 "framework_wait_init", 00:23:35.039 "framework_start_init", 00:23:35.039 "scsi_get_devices", 00:23:35.039 "bdev_get_histogram", 00:23:35.039 "bdev_enable_histogram", 00:23:35.039 "bdev_set_qos_limit", 00:23:35.039 "bdev_set_qd_sampling_period", 00:23:35.039 "bdev_get_bdevs", 00:23:35.039 "bdev_reset_iostat", 00:23:35.039 "bdev_get_iostat", 00:23:35.039 "bdev_examine", 00:23:35.039 "bdev_wait_for_examine", 00:23:35.039 "bdev_set_options", 00:23:35.039 "accel_get_stats", 00:23:35.039 "accel_set_options", 
00:23:35.039 "accel_set_driver", 00:23:35.039 "accel_crypto_key_destroy", 00:23:35.039 "accel_crypto_keys_get", 00:23:35.039 "accel_crypto_key_create", 00:23:35.039 "accel_assign_opc", 00:23:35.039 "accel_get_module_info", 00:23:35.039 "accel_get_opc_assignments", 00:23:35.039 "vmd_rescan", 00:23:35.039 "vmd_remove_device", 00:23:35.039 "vmd_enable", 00:23:35.039 "sock_get_default_impl", 00:23:35.039 "sock_set_default_impl", 00:23:35.039 "sock_impl_set_options", 00:23:35.039 "sock_impl_get_options", 00:23:35.039 "iobuf_get_stats", 00:23:35.039 "iobuf_set_options", 00:23:35.039 "keyring_get_keys", 00:23:35.039 "framework_get_pci_devices", 00:23:35.039 "framework_get_config", 00:23:35.039 "framework_get_subsystems", 00:23:35.039 "fsdev_set_opts", 00:23:35.039 "fsdev_get_opts", 00:23:35.039 "trace_get_info", 00:23:35.039 "trace_get_tpoint_group_mask", 00:23:35.039 "trace_disable_tpoint_group", 00:23:35.039 "trace_enable_tpoint_group", 00:23:35.039 "trace_clear_tpoint_mask", 00:23:35.039 "trace_set_tpoint_mask", 00:23:35.039 "notify_get_notifications", 00:23:35.039 "notify_get_types", 00:23:35.039 "spdk_get_version", 00:23:35.039 "rpc_get_methods" 00:23:35.039 ] 00:23:35.039 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:35.039 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:35.039 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 56921 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 56921 ']' 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 56921 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.039 15:53:07 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56921 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56921' 00:23:35.039 killing process with pid 56921 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 56921 00:23:35.039 15:53:07 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 56921 00:23:36.941 00:23:36.941 real 0m2.914s 00:23:36.941 user 0m5.260s 00:23:36.941 sys 0m0.452s 00:23:36.941 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:36.941 ************************************ 00:23:36.941 END TEST spdkcli_tcp 00:23:36.941 ************************************ 00:23:36.941 15:53:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:36.941 15:53:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:23:36.941 15:53:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:36.941 15:53:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:36.941 15:53:08 -- common/autotest_common.sh@10 -- # set +x 00:23:36.941 ************************************ 00:23:36.941 START TEST dpdk_mem_utility 00:23:36.941 ************************************ 00:23:36.941 15:53:08 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:23:36.941 * Looking for test storage... 
00:23:36.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:23:36.941 15:53:08 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:36.941 15:53:08 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:36.941 15:53:08 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.941 15:53:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.941 --rc genhtml_branch_coverage=1 00:23:36.941 --rc genhtml_function_coverage=1 00:23:36.941 --rc genhtml_legend=1 00:23:36.941 --rc geninfo_all_blocks=1 00:23:36.941 --rc geninfo_unexecuted_blocks=1 00:23:36.941 00:23:36.941 ' 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.941 --rc genhtml_branch_coverage=1 00:23:36.941 --rc genhtml_function_coverage=1 00:23:36.941 --rc genhtml_legend=1 00:23:36.941 --rc geninfo_all_blocks=1 00:23:36.941 --rc 
geninfo_unexecuted_blocks=1 00:23:36.941 00:23:36.941 ' 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.941 --rc genhtml_branch_coverage=1 00:23:36.941 --rc genhtml_function_coverage=1 00:23:36.941 --rc genhtml_legend=1 00:23:36.941 --rc geninfo_all_blocks=1 00:23:36.941 --rc geninfo_unexecuted_blocks=1 00:23:36.941 00:23:36.941 ' 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.941 --rc genhtml_branch_coverage=1 00:23:36.941 --rc genhtml_function_coverage=1 00:23:36.941 --rc genhtml_legend=1 00:23:36.941 --rc geninfo_all_blocks=1 00:23:36.941 --rc geninfo_unexecuted_blocks=1 00:23:36.941 00:23:36.941 ' 00:23:36.941 15:53:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:23:36.941 15:53:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57032 00:23:36.941 15:53:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57032 00:23:36.941 15:53:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57032 ']' 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.941 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:23:36.941 [2024-11-05 15:53:09.124225] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:23:36.941 [2024-11-05 15:53:09.124485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57032 ] 00:23:36.941 [2024-11-05 15:53:09.279667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.199 [2024-11-05 15:53:09.394344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.763 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:37.763 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:23:37.763 15:53:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:23:37.763 15:53:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:23:37.763 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.763 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:23:37.763 { 00:23:37.763 "filename": "/tmp/spdk_mem_dump.txt" 00:23:37.763 } 00:23:37.763 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.763 15:53:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:23:37.763 DPDK memory size 816.000000 MiB in 1 heap(s) 00:23:37.763 1 heaps totaling size 816.000000 MiB 00:23:37.763 size: 816.000000 MiB heap id: 0 00:23:37.763 end heaps---------- 00:23:37.763 9 mempools totaling size 595.772034 MiB 00:23:37.763 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:23:37.763 size: 158.602051 MiB name: PDU_data_out_Pool 00:23:37.764 size: 92.545471 MiB name: bdev_io_57032 00:23:37.764 size: 50.003479 MiB name: msgpool_57032 00:23:37.764 size: 36.509338 MiB name: fsdev_io_57032 00:23:37.764 size: 21.763794 MiB name: PDU_Pool 00:23:37.764 size: 19.513306 MiB name: SCSI_TASK_Pool 00:23:37.764 size: 4.133484 MiB name: evtpool_57032 00:23:37.764 size: 0.026123 MiB name: Session_Pool 00:23:37.764 end mempools------- 00:23:37.764 6 memzones totaling size 4.142822 MiB 00:23:37.764 size: 1.000366 MiB name: RG_ring_0_57032 00:23:37.764 size: 1.000366 MiB name: RG_ring_1_57032 00:23:37.764 size: 1.000366 MiB name: RG_ring_4_57032 00:23:37.764 size: 1.000366 MiB name: RG_ring_5_57032 00:23:37.764 size: 0.125366 MiB name: RG_ring_2_57032 00:23:37.764 size: 0.015991 MiB name: RG_ring_3_57032 00:23:37.764 end memzones------- 00:23:37.764 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:23:37.764 heap id: 0 total size: 816.000000 MiB number of busy elements: 325 number of free elements: 18 00:23:37.764 list of free elements. 
size: 16.788940 MiB 00:23:37.764 element at address: 0x200006400000 with size: 1.995972 MiB 00:23:37.764 element at address: 0x20000a600000 with size: 1.995972 MiB 00:23:37.764 element at address: 0x200003e00000 with size: 1.991028 MiB 00:23:37.764 element at address: 0x200018d00040 with size: 0.999939 MiB 00:23:37.764 element at address: 0x200019100040 with size: 0.999939 MiB 00:23:37.764 element at address: 0x200019200000 with size: 0.999084 MiB 00:23:37.764 element at address: 0x200031e00000 with size: 0.994324 MiB 00:23:37.764 element at address: 0x200000400000 with size: 0.992004 MiB 00:23:37.764 element at address: 0x200018a00000 with size: 0.959656 MiB 00:23:37.764 element at address: 0x200019500040 with size: 0.936401 MiB 00:23:37.764 element at address: 0x200000200000 with size: 0.716980 MiB 00:23:37.764 element at address: 0x20001ac00000 with size: 0.559509 MiB 00:23:37.764 element at address: 0x200000c00000 with size: 0.490173 MiB 00:23:37.764 element at address: 0x200018e00000 with size: 0.487976 MiB 00:23:37.764 element at address: 0x200019600000 with size: 0.485413 MiB 00:23:37.764 element at address: 0x200012c00000 with size: 0.443237 MiB 00:23:37.764 element at address: 0x200028000000 with size: 0.390442 MiB 00:23:37.764 element at address: 0x200000800000 with size: 0.350891 MiB 00:23:37.764 list of standard malloc elements. 
size: 199.290161 MiB 00:23:37.764
element at address: 0x20000a7fef80 with size: 132.000183 MiB
element at address: 0x2000065fef80 with size: 64.000183 MiB
element at address: 0x200018bfff80 with size: 1.000183 MiB
element at address: 0x200018ffff80 with size: 1.000183 MiB
element at address: 0x2000193fff80 with size: 1.000183 MiB
element at address: 0x2000003d9e80 with size: 0.140808 MiB
element at address: 0x2000195eff40 with size: 0.062683 MiB
element at address: 0x2000003fdf40 with size: 0.007996 MiB
element at address: 0x20000a5ff040 with size: 0.000427 MiB
element at address: 0x2000195efdc0 with size: 0.000366 MiB
element at address: 0x200012bff040 with size: 0.000305 MiB
element at address: 0x2000002d7b00 with size: 0.000244 MiB
element at address: 0x2000003d9d80 with size: 0.000244 MiB
elements at addresses 0x2000004fdf40 through 0x2000004ff940 (0x100 apart), each with size: 0.000244 MiB
elements at addresses 0x2000004ffbc0 through 0x2000004ffdc0 (0x100 apart), each with size: 0.000244 MiB
elements at addresses 0x20000087e1c0 through 0x20000087f4c0 (0x100 apart), each with size: 0.000244 MiB
element at address: 0x2000008ff800 with size: 0.000244 MiB
element at address: 0x2000008ffa80 with size: 0.000244 MiB
elements at addresses 0x200000c7d7c0 through 0x200000c7ebc0 (0x100 apart), each with size: 0.000244 MiB
element at address: 0x200000cfef00 with size: 0.000244 MiB
element at address: 0x200000cff000 with size: 0.000244 MiB
elements at addresses 0x20000a5ff200 through 0x20000a5fff00 (0x100 apart), each with size: 0.000244 MiB
elements at addresses 0x200012bff180 through 0x200012bffc80 (0x100 apart), each with size: 0.000244 MiB
element at address: 0x200012bfff00 with size: 0.000244 MiB
elements at addresses 0x200012c71780 through 0x200012c72180 (0x100 apart), each with size: 0.000244 MiB
element at address: 0x200012cf24c0 with size: 0.000244 MiB
element at address: 0x200018afdd00 with size: 0.000244 MiB
elements at addresses 0x200018e7cec0 through 0x200018e7d9c0 (0x100 apart), each with size: 0.000244 MiB
element at address: 0x200018efdd00 with size: 0.000244 MiB
element at address: 0x2000192ffc40 with size: 0.000244 MiB
element at address: 0x2000195efbc0 with size: 0.000244 MiB
element at address: 0x2000195efcc0 with size: 0.000244 MiB
element at address: 0x2000196bc680 with size: 0.000244 MiB
elements at addresses 0x20001ac8f3c0 through 0x20001ac953c0 (0x100 apart), each with size: 0.000244 MiB
element at address: 0x200028063f40 with size: 0.000244 MiB
element at address: 0x200028064040 with size: 0.000244 MiB
element at address: 0x20002806ad00 with size: 0.000244 MiB
element at address: 0x20002806af80 with size: 0.000244 MiB
elements at addresses 0x20002806b080 through 0x20002806fe80 (0x100 apart), each with size: 0.000244 MiB
00:23:37.766 list of memzone associated elements.
size: 599.920898 MiB 00:23:37.766
element at address: 0x20001ac954c0 with size: 211.416809 MiB; associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
element at address: 0x20002806ff80 with size: 157.562622 MiB; associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
element at address: 0x200012df4740 with size: 92.045105 MiB; associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57032_0
element at address: 0x200000dff340 with size: 48.003113 MiB; associated memzone info: size: 48.002930 MiB name: MP_msgpool_57032_0
element at address: 0x200003ffdb40 with size: 36.008972 MiB; associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57032_0
element at address: 0x2000197be900 with size: 20.255615 MiB; associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
element at address: 0x200031ffeb00 with size: 18.005127 MiB; associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
element at address: 0x2000004ffec0 with size: 3.000305 MiB; associated memzone info: size: 3.000122 MiB name: MP_evtpool_57032_0
element at address: 0x2000009ffdc0 with size: 2.000549 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57032
element at address: 0x2000002d7c00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_evtpool_57032
element at address: 0x200018efde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
element at address: 0x2000196bc780 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
element at address: 0x200018afde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
element at address: 0x200012cf25c0 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
element at address: 0x200000cff100 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_0_57032
element at address: 0x2000008ffb80 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_1_57032
element at address: 0x2000192ffd40 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_4_57032
element at address: 0x200031efe8c0 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_5_57032
element at address: 0x20000087f5c0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57032
element at address: 0x200000c7ecc0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57032
element at address: 0x200018e7dac0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
element at address: 0x200012c72280 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
element at address: 0x20001967c440 with size: 0.250549 MiB; associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
element at address: 0x2000002b78c0 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57032
element at address: 0x20000085df80 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_ring_2_57032
element at address: 0x200018af5ac0 with size: 0.031799 MiB; associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
element at address: 0x200028064140 with size: 0.023804 MiB; associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
element at address: 0x200000859d40 with size: 0.016174 MiB; associated memzone info: size: 0.015991 MiB name: RG_ring_3_57032
element at address: 0x20002806a2c0 with size: 0.002502 MiB; associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
element at address: 0x2000004ffa40 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_msgpool_57032
element at address: 0x2000008ff900 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57032
element at address: 0x200012bffd80 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57032
element at address: 0x20002806ae00 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:23:37.767 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57032
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57032 ']'
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57032
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57032
killing process with pid 57032
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:23:37.767 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57032'
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57032
15:53:10 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57032
00:23:39.667
real 0m2.746s
user 0m2.719s
sys 0m0.399s
15:53:11 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
15:53:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST dpdk_mem_utility
************************************
15:53:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
15:53:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
15:53:11 -- common/autotest_common.sh@1109 -- # xtrace_disable
15:53:11 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST event
************************************
15:53:11 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
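The dpdk_mem_utility dump above is highly repetitive and lends itself to mechanical post-processing. A minimal sketch of how such a dump could be summarized offline; the regex, helper name, and sample lines here are illustrative, not part of the SPDK harness:

```python
import re

# Matches log lines of the form emitted above, e.g.
#   element at address: 0x20000a7fef80 with size: 132.000183 MiB
ELEMENT_RE = re.compile(r"element at address: (0x[0-9a-f]+) with size: ([0-9.]+) MiB")

def summarize(dump: str):
    """Total the free elements and count them per distinct size."""
    counts_by_size = {}
    total_mib = 0.0
    for _addr, size in ELEMENT_RE.findall(dump):
        sz = float(size)
        total_mib += sz
        counts_by_size[sz] = counts_by_size.get(sz, 0) + 1
    return total_mib, counts_by_size

# Three lines copied from the dump above, as a smoke test:
sample = """
element at address: 0x20000a7fef80 with size: 132.000183 MiB
element at address: 0x2000065fef80 with size: 64.000183 MiB
element at address: 0x2000002d7b00 with size: 0.000244 MiB
"""
total_mib, counts_by_size = summarize(sample)
```

On a full dump this makes the skew obvious: two large free elements dominate, while hundreds of 0.000244 MiB fragments contribute almost nothing to the total.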
00:23:39.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
15:53:11 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
15:53:11 event -- common/autotest_common.sh@1691 -- # lcov --version
15:53:11 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
15:53:11 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
15:53:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
15:53:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l
15:53:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l
15:53:11 event -- scripts/common.sh@336 -- # IFS=.-:
15:53:11 event -- scripts/common.sh@336 -- # read -ra ver1
15:53:11 event -- scripts/common.sh@337 -- # IFS=.-:
15:53:11 event -- scripts/common.sh@337 -- # read -ra ver2
15:53:11 event -- scripts/common.sh@338 -- # local 'op=<'
15:53:11 event -- scripts/common.sh@340 -- # ver1_l=2
15:53:11 event -- scripts/common.sh@341 -- # ver2_l=1
15:53:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
15:53:11 event -- scripts/common.sh@344 -- # case "$op" in
15:53:11 event -- scripts/common.sh@345 -- # : 1
15:53:11 event -- scripts/common.sh@364 -- # (( v = 0 ))
15:53:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
15:53:11 event -- scripts/common.sh@365 -- # decimal 1
15:53:11 event -- scripts/common.sh@353 -- # local d=1
15:53:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
15:53:11 event -- scripts/common.sh@355 -- # echo 1
15:53:11 event -- scripts/common.sh@365 -- # ver1[v]=1
15:53:11 event -- scripts/common.sh@366 -- # decimal 2
15:53:11 event -- scripts/common.sh@353 -- # local d=2
15:53:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
15:53:11 event -- scripts/common.sh@355 -- # echo 2
15:53:11 event -- scripts/common.sh@366 -- # ver2[v]=2
15:53:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
15:53:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
15:53:11 event -- scripts/common.sh@368 -- # return 0
15:53:11 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
15:53:11 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
15:53:11 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
15:53:11 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
15:53:11 event -- common/autotest_common.sh@1705 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
15:53:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
15:53:11 event -- bdev/nbd_common.sh@6 -- # set -e
15:53:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
15:53:11 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
15:53:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable
15:53:11 event -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST event_perf
************************************
15:53:11 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-11-05 15:53:11.867352] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization...
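The cmp_versions trace above (from scripts/common.sh) splits both version strings on `.`, `-` and `:`, pads the shorter one with zeros, and compares component by component until one side wins. A rough Python equivalent of that logic, for readers following the trace; the function name and structure here are mine, not copied from the shell script:

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Component-wise version comparison mirroring the shell trace:
    split on . - :, zero-pad the shorter list, compare left to right."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    lt = gt = 0
    for x, y in zip(a, b):
        if x > y:
            gt = 1
            break
        if x < y:
            lt = 1
            break
    if op == "<":
        return lt == 1
    if op == ">":
        return gt == 1
    return lt == 0 and gt == 0  # "=" case

# The trace evaluates `lt 1.15 2`, i.e. cmp_versions("1.15", "<", "2"):
# 1 < 2 on the first component, so the check succeeds and `return 0` fires.
```

This explains why lcov 1.15 takes the legacy `--rc lcov_*` option spelling selected just below in the log.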
00:23:39.668 [2024-11-05 15:53:11.867560] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57129 ] 00:23:39.668 [2024-11-05 15:53:12.028995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.926 [2024-11-05 15:53:12.135541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.926 [2024-11-05 15:53:12.136177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.926 [2024-11-05 15:53:12.136226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.926 Running I/O for 1 seconds...[2024-11-05 15:53:12.136251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.864 00:23:40.864 lcore 0: 194833 00:23:40.864 lcore 1: 194835 00:23:40.864 lcore 2: 194832 00:23:40.864 lcore 3: 194834 00:23:41.175 done. 
00:23:41.175 00:23:41.175 real 0m1.472s 00:23:41.175 ************************************ 00:23:41.175 END TEST event_perf 00:23:41.175 ************************************ 00:23:41.175 user 0m4.252s 00:23:41.175 sys 0m0.096s 00:23:41.175 15:53:13 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:41.175 15:53:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:23:41.175 15:53:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:23:41.175 15:53:13 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:41.175 15:53:13 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:41.175 15:53:13 event -- common/autotest_common.sh@10 -- # set +x 00:23:41.175 ************************************ 00:23:41.175 START TEST event_reactor 00:23:41.175 ************************************ 00:23:41.175 15:53:13 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:23:41.175 [2024-11-05 15:53:13.382209] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:23:41.175 [2024-11-05 15:53:13.382318] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57163 ] 00:23:41.175 [2024-11-05 15:53:13.543685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.438 [2024-11-05 15:53:13.643257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.813 test_start 00:23:42.813 oneshot 00:23:42.813 tick 100 00:23:42.813 tick 100 00:23:42.813 tick 250 00:23:42.813 tick 100 00:23:42.813 tick 100 00:23:42.813 tick 100 00:23:42.813 tick 250 00:23:42.813 tick 500 00:23:42.813 tick 100 00:23:42.813 tick 100 00:23:42.813 tick 250 00:23:42.813 tick 100 00:23:42.813 tick 100 00:23:42.813 test_end 00:23:42.813 00:23:42.813 real 0m1.443s 00:23:42.813 user 0m1.277s 00:23:42.813 sys 0m0.059s 00:23:42.813 15:53:14 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:42.813 15:53:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:23:42.813 ************************************ 00:23:42.813 END TEST event_reactor 00:23:42.813 ************************************ 00:23:42.813 15:53:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:23:42.813 15:53:14 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:42.813 15:53:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:42.813 15:53:14 event -- common/autotest_common.sh@10 -- # set +x 00:23:42.813 ************************************ 00:23:42.813 START TEST event_reactor_perf 00:23:42.813 ************************************ 00:23:42.813 15:53:14 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:23:42.813 [2024-11-05 
15:53:14.866899] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:23:42.813 [2024-11-05 15:53:14.867199] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57199 ] 00:23:42.813 [2024-11-05 15:53:15.025274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.813 [2024-11-05 15:53:15.176024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.185 test_start 00:23:44.185 test_end 00:23:44.185 Performance: 315059 events per second 00:23:44.185 00:23:44.185 real 0m1.491s 00:23:44.185 user 0m1.325s 00:23:44.185 sys 0m0.057s 00:23:44.185 15:53:16 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:44.185 ************************************ 00:23:44.185 END TEST event_reactor_perf 00:23:44.185 ************************************ 00:23:44.185 15:53:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:23:44.185 15:53:16 event -- event/event.sh@49 -- # uname -s 00:23:44.185 15:53:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:23:44.185 15:53:16 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:23:44.185 15:53:16 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:44.185 15:53:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:44.185 15:53:16 event -- common/autotest_common.sh@10 -- # set +x 00:23:44.185 ************************************ 00:23:44.185 START TEST event_scheduler 00:23:44.185 ************************************ 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:23:44.185 * Looking for test storage... 
00:23:44.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.185 15:53:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:44.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.185 --rc genhtml_branch_coverage=1 00:23:44.185 --rc genhtml_function_coverage=1 00:23:44.185 --rc genhtml_legend=1 00:23:44.185 --rc geninfo_all_blocks=1 00:23:44.185 --rc geninfo_unexecuted_blocks=1 00:23:44.185 00:23:44.185 ' 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:44.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.185 --rc genhtml_branch_coverage=1 00:23:44.185 --rc genhtml_function_coverage=1 00:23:44.185 --rc 
genhtml_legend=1 00:23:44.185 --rc geninfo_all_blocks=1 00:23:44.185 --rc geninfo_unexecuted_blocks=1 00:23:44.185 00:23:44.185 ' 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:44.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.185 --rc genhtml_branch_coverage=1 00:23:44.185 --rc genhtml_function_coverage=1 00:23:44.185 --rc genhtml_legend=1 00:23:44.185 --rc geninfo_all_blocks=1 00:23:44.185 --rc geninfo_unexecuted_blocks=1 00:23:44.185 00:23:44.185 ' 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:44.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.185 --rc genhtml_branch_coverage=1 00:23:44.185 --rc genhtml_function_coverage=1 00:23:44.185 --rc genhtml_legend=1 00:23:44.185 --rc geninfo_all_blocks=1 00:23:44.185 --rc geninfo_unexecuted_blocks=1 00:23:44.185 00:23:44.185 ' 00:23:44.185 15:53:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:23:44.185 15:53:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57275 00:23:44.185 15:53:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:23:44.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:44.185 15:53:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57275 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 57275 ']' 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.185 15:53:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:44.185 15:53:16 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.186 15:53:16 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:44.186 15:53:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:44.186 [2024-11-05 15:53:16.579045] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:23:44.186 [2024-11-05 15:53:16.579172] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57275 ] 00:23:44.444 [2024-11-05 15:53:16.733267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.444 [2024-11-05 15:53:16.839395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.444 [2024-11-05 15:53:16.839530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.444 [2024-11-05 15:53:16.839898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.444 [2024-11-05 15:53:16.839796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.009 15:53:17 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:45.009 15:53:17 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:23:45.009 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:23:45.009 15:53:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.009 15:53:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:45.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:45.009 POWER: Cannot set governor of lcore 0 to userspace 00:23:45.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:45.009 POWER: Cannot set governor of lcore 0 to performance 00:23:45.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:45.009 POWER: Cannot set governor of lcore 0 to userspace 00:23:45.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:45.009 POWER: Cannot set governor of lcore 0 to userspace 00:23:45.009 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:23:45.009 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:23:45.009 POWER: Unable to set Power Management Environment for lcore 0 00:23:45.009 [2024-11-05 15:53:17.421624] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:23:45.009 [2024-11-05 15:53:17.421660] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:23:45.009 [2024-11-05 15:53:17.421681] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:23:45.009 [2024-11-05 15:53:17.421712] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:23:45.009 [2024-11-05 15:53:17.421822] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:23:45.009 [2024-11-05 15:53:17.421894] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:23:45.267 15:53:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.267 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:23:45.267 15:53:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.267 15:53:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:45.267 [2024-11-05 15:53:17.642547] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:23:45.267 15:53:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.267 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:23:45.267 15:53:17 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:45.267 15:53:17 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:45.267 15:53:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:45.267 ************************************ 00:23:45.267 START TEST scheduler_create_thread 00:23:45.267 ************************************ 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.267 2 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.267 3 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.267 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.267 4 00:23:45.268 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.268 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:23:45.268 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.268 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 5 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 6 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:23:45.561 7 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 8 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 9 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 10 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:23:45.561 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.561 15:53:17 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:46.127 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.127 ************************************ 00:23:46.127 END TEST scheduler_create_thread 00:23:46.127 ************************************ 00:23:46.127 00:23:46.127 real 0m0.591s 00:23:46.127 user 0m0.011s 00:23:46.127 sys 0m0.006s 00:23:46.127 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:46.127 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:46.127 15:53:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:46.127 15:53:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57275 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 57275 ']' 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 57275 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57275 00:23:46.127 killing process with pid 57275 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57275' 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 57275 00:23:46.127 15:53:18 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 57275 00:23:46.384 [2024-11-05 15:53:18.723734] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:23:46.948 ************************************ 00:23:46.948 END TEST event_scheduler 00:23:46.948 ************************************ 00:23:46.948 00:23:46.948 real 0m2.975s 00:23:46.948 user 0m5.671s 00:23:46.948 sys 0m0.349s 00:23:46.948 15:53:19 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:46.948 15:53:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:47.207 15:53:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:23:47.207 15:53:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:23:47.207 15:53:19 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:47.207 15:53:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:47.207 15:53:19 event -- common/autotest_common.sh@10 -- # set +x 00:23:47.207 ************************************ 00:23:47.207 START TEST app_repeat 00:23:47.207 ************************************ 00:23:47.207 15:53:19 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57354 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:23:47.207 15:53:19 event.app_repeat -- 
event/event.sh@21 -- # echo 'Process app_repeat pid: 57354' 00:23:47.207 Process app_repeat pid: 57354 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:47.207 spdk_app_start Round 0 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57354 /var/tmp/spdk-nbd.sock 00:23:47.207 15:53:19 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:23:47.207 15:53:19 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57354 ']' 00:23:47.207 15:53:19 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:47.207 15:53:19 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:47.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:47.207 15:53:19 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:47.207 15:53:19 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:47.207 15:53:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:47.207 [2024-11-05 15:53:19.435311] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:23:47.207 [2024-11-05 15:53:19.435424] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57354 ] 00:23:47.207 [2024-11-05 15:53:19.598036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:47.464 [2024-11-05 15:53:19.701022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.464 [2024-11-05 15:53:19.701301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.030 15:53:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:48.030 15:53:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:23:48.030 15:53:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:48.288 Malloc0 00:23:48.288 15:53:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:48.546 Malloc1 00:23:48.546 15:53:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:48.546 15:53:20 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:48.546 15:53:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:48.803 /dev/nbd0 00:23:48.803 15:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:48.803 15:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:48.803 1+0 records in 00:23:48.803 1+0 
records out 00:23:48.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177297 s, 23.1 MB/s 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:48.803 15:53:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:23:48.803 15:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:48.803 15:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:48.803 15:53:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:49.061 /dev/nbd1 00:23:49.061 15:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:49.061 15:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:49.061 1+0 records in 00:23:49.061 1+0 records out 00:23:49.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268363 s, 15.3 MB/s 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:49.061 15:53:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:23:49.061 15:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:49.061 15:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:49.061 15:53:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:49.061 15:53:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:49.061 15:53:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:49.318 15:53:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:49.318 { 00:23:49.318 "nbd_device": "/dev/nbd0", 00:23:49.318 "bdev_name": "Malloc0" 00:23:49.318 }, 00:23:49.318 { 00:23:49.318 "nbd_device": "/dev/nbd1", 00:23:49.318 "bdev_name": "Malloc1" 00:23:49.318 } 00:23:49.318 ]' 00:23:49.318 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:49.318 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:49.318 { 00:23:49.318 "nbd_device": "/dev/nbd0", 00:23:49.318 "bdev_name": "Malloc0" 00:23:49.318 }, 00:23:49.318 { 00:23:49.318 "nbd_device": "/dev/nbd1", 00:23:49.318 "bdev_name": "Malloc1" 00:23:49.318 } 00:23:49.318 ]' 
00:23:49.318 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:49.318 /dev/nbd1' 00:23:49.318 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:49.318 /dev/nbd1' 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:49.319 256+0 records in 00:23:49.319 256+0 records out 00:23:49.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00906419 s, 116 MB/s 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:49.319 256+0 records in 00:23:49.319 256+0 records out 00:23:49.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184513 s, 56.8 MB/s 00:23:49.319 15:53:21 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:49.319 256+0 records in 00:23:49.319 256+0 records out 00:23:49.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019399 s, 54.1 MB/s 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:49.319 15:53:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:49.576 15:53:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:49.834 15:53:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:50.098 15:53:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:50.098 15:53:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:50.356 15:53:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:51.289 [2024-11-05 15:53:23.465971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:51.289 [2024-11-05 15:53:23.564716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.289 [2024-11-05 15:53:23.564726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.289 
[2024-11-05 15:53:23.688784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:51.289 [2024-11-05 15:53:23.688834] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:53.818 spdk_app_start Round 1 00:23:53.818 15:53:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:53.818 15:53:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:23:53.818 15:53:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57354 /var/tmp/spdk-nbd.sock 00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57354 ']' 00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:53.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.818 15:53:25 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:23:53.818 15:53:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:53.818 Malloc0 00:23:53.818 15:53:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:54.077 Malloc1 00:23:54.077 15:53:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:54.077 15:53:26 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:54.077 15:53:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:54.338 /dev/nbd0 00:23:54.338 15:53:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:54.338 15:53:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:54.338 1+0 records in 00:23:54.338 1+0 records out 00:23:54.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016495 s, 24.8 MB/s 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:54.338 15:53:26 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:54.338 15:53:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:23:54.338 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:54.338 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:54.338 15:53:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:54.655 /dev/nbd1 00:23:54.655 15:53:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:54.655 15:53:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:54.655 1+0 records in 00:23:54.655 1+0 records out 00:23:54.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179219 s, 22.9 MB/s 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:54.655 15:53:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:23:54.656 15:53:26 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:54.656 15:53:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:54.656 15:53:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:23:54.656 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:54.656 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:54.656 15:53:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:54.656 15:53:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:54.656 15:53:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:54.914 { 00:23:54.914 "nbd_device": "/dev/nbd0", 00:23:54.914 "bdev_name": "Malloc0" 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "nbd_device": "/dev/nbd1", 00:23:54.914 "bdev_name": "Malloc1" 00:23:54.914 } 00:23:54.914 ]' 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:54.914 { 00:23:54.914 "nbd_device": "/dev/nbd0", 00:23:54.914 "bdev_name": "Malloc0" 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "nbd_device": "/dev/nbd1", 00:23:54.914 "bdev_name": "Malloc1" 00:23:54.914 } 00:23:54.914 ]' 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:54.914 /dev/nbd1' 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:54.914 /dev/nbd1' 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:54.914 
15:53:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:54.914 256+0 records in 00:23:54.914 256+0 records out 00:23:54.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00693424 s, 151 MB/s 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:54.914 256+0 records in 00:23:54.914 256+0 records out 00:23:54.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154583 s, 67.8 MB/s 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:54.914 15:53:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:54.914 256+0 records in 00:23:54.914 256+0 records out 00:23:54.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166693 s, 62.9 MB/s 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.915 15:53:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:55.173 15:53:27 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:55.173 15:53:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:55.433 15:53:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:55.692 15:53:27 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:55.692 15:53:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:55.692 15:53:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:55.978 15:53:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:56.544 [2024-11-05 15:53:28.734182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:56.544 [2024-11-05 15:53:28.816881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.544 [2024-11-05 15:53:28.816883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.544 [2024-11-05 15:53:28.922555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:56.544 [2024-11-05 15:53:28.922613] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:23:59.076 spdk_app_start Round 2 00:23:59.076 15:53:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:59.076 15:53:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:23:59.076 15:53:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57354 /var/tmp/spdk-nbd.sock 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57354 ']' 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:59.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:59.076 15:53:31 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:23:59.076 15:53:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:59.335 Malloc0 00:23:59.335 15:53:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:59.594 Malloc1 00:23:59.594 15:53:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:59.594 
15:53:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.594 15:53:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:59.858 /dev/nbd0 00:23:59.859 15:53:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:59.859 15:53:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:59.859 15:53:32 
event.app_repeat -- common/autotest_common.sh@875 -- # break 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:59.859 1+0 records in 00:23:59.859 1+0 records out 00:23:59.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358856 s, 11.4 MB/s 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:59.859 15:53:32 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:23:59.859 15:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.859 15:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.859 15:53:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:00.117 /dev/nbd1 00:24:00.117 15:53:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:00.117 15:53:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:00.117 15:53:32 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:00.117 1+0 records in 00:24:00.117 1+0 records out 00:24:00.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321186 s, 12.8 MB/s 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:00.117 15:53:32 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:24:00.117 15:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:00.117 15:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:00.117 15:53:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:00.117 15:53:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:00.117 15:53:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:00.376 { 00:24:00.376 "nbd_device": "/dev/nbd0", 00:24:00.376 "bdev_name": "Malloc0" 00:24:00.376 }, 00:24:00.376 { 00:24:00.376 "nbd_device": "/dev/nbd1", 00:24:00.376 "bdev_name": 
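The `waitfornbd` traces above poll `/proc/partitions` for the new device name and then issue a direct 4 KiB read to confirm the device answers I/O. A minimal sketch of that polling pattern follows; the temp file standing in for `/proc/partitions` is an assumption made so the sketch runs without a real nbd device attached.

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling pattern traced above: retry up to 20
# times until the device name appears as a whole word in the partition
# table. A temp file stands in for /proc/partitions (demo assumption).
partitions=$(mktemp)
printf ' 43        0       4096 nbd0\n' > "$partitions"

waitfornbd() {
    local nbd_name=$1 table=$2 i
    for (( i = 1; i <= 20; i++ )); do
        # grep -q -w matches nbd0 as a whole word (not nbd01), as in the log
        grep -q -w "$nbd_name" "$table" && return 0
        sleep 0.1
    done
    return 1
}

waitfornbd nbd0 "$partitions" && echo "nbd0 ready"
rm -f "$partitions"
```

The real helper follows the successful grep with a `dd if=/dev/nbd0 ... iflag=direct` read, which is why the trace shows a `1+0 records in` transfer immediately after the `break`.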
"Malloc1" 00:24:00.376 } 00:24:00.376 ]' 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:00.376 { 00:24:00.376 "nbd_device": "/dev/nbd0", 00:24:00.376 "bdev_name": "Malloc0" 00:24:00.376 }, 00:24:00.376 { 00:24:00.376 "nbd_device": "/dev/nbd1", 00:24:00.376 "bdev_name": "Malloc1" 00:24:00.376 } 00:24:00.376 ]' 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:00.376 /dev/nbd1' 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:00.376 /dev/nbd1' 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:00.376 256+0 records in 00:24:00.376 256+0 records out 00:24:00.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0075566 s, 139 MB/s 
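The `nbd_get_count` traces above turn the `nbd_get_disks` RPC output into a device count: `jq -r` extracts the `nbd_device` paths and `grep -c /dev/nbd` counts the lines. A self-contained sketch using a JSON literal that mirrors the logged RPC reply (so no SPDK process is needed, which is an assumption of this demo):

```shell
#!/usr/bin/env bash
# Sketch of the disk-count derivation traced above. The JSON literal
# mirrors the nbd_get_disks output in the log; jq must be installed.
json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# -r prints raw strings, one device path per line
names=$(echo "$json" | jq -r '.[] | .nbd_device')
# grep -c counts matching lines, giving the attached-disk count
count=$(echo "$names" | grep -c /dev/nbd)
echo "$count"   # prints 2
```

After the disks are stopped later in the log, the same pipeline runs against `'[]'` and yields 0, which is what the final `'[' 0 -ne 0 ']'` check asserts.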
00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:00.376 256+0 records in 00:24:00.376 256+0 records out 00:24:00.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169692 s, 61.8 MB/s 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:00.376 256+0 records in 00:24:00.376 256+0 records out 00:24:00.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181484 s, 57.8 MB/s 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:00.376 15:53:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:00.377 15:53:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:00.635 15:53:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
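The `nbd_dd_data_verify` write and verify passes above follow a simple round-trip: fill a reference file with 1 MiB of random data, `dd` it onto each device, then `cmp` each device back against the reference. A hedged sketch of that flow; plain temp files stand in for `/dev/nbd0` and `/dev/nbd1` here, which is a demo assumption:

```shell
#!/usr/bin/env bash
# Sketch of the write/verify round-trip traced above. Temp files stand
# in for the nbd devices so the sketch runs without root or nbd support.
ref=$(mktemp); dev0=$(mktemp); dev1=$(mktemp)

# Write pass: 256 x 4 KiB random blocks, then copy onto each "device".
dd if=/dev/urandom of="$ref" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    # The real script adds oflag=direct to bypass the page cache.
    dd if="$ref" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify pass: byte-compare the first 1 MiB of each device to the reference.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$ref" "$dev" && echo "verify ok: $dev"
done
rm -f "$ref" "$dev0" "$dev1"
```

`cmp` exits nonzero on the first differing byte, so a silent loop (as in the log) means every block written through the nbd layer read back intact.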
waitfornbd_exit nbd1 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:00.894 15:53:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:01.153 15:53:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:24:01.153 15:53:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:01.411 15:53:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:24:01.979 [2024-11-05 15:53:34.280535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:01.979 [2024-11-05 15:53:34.366147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.979 [2024-11-05 15:53:34.366354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.237 [2024-11-05 15:53:34.472911] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:02.237 [2024-11-05 15:53:34.472974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:04.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:04.792 15:53:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57354 /var/tmp/spdk-nbd.sock 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57354 ']' 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:24:04.792 15:53:36 event.app_repeat -- event/event.sh@39 -- # killprocess 57354 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 57354 ']' 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 57354 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57354 00:24:04.792 killing process with pid 57354 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57354' 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@971 -- # kill 57354 00:24:04.792 15:53:36 event.app_repeat -- common/autotest_common.sh@976 -- # wait 57354 00:24:05.358 spdk_app_start is called in Round 0. 00:24:05.358 Shutdown signal received, stop current app iteration 00:24:05.358 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 reinitialization... 00:24:05.358 spdk_app_start is called in Round 1. 00:24:05.358 Shutdown signal received, stop current app iteration 00:24:05.358 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 reinitialization... 00:24:05.358 spdk_app_start is called in Round 2. 
00:24:05.358 Shutdown signal received, stop current app iteration 00:24:05.358 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 reinitialization... 00:24:05.358 spdk_app_start is called in Round 3. 00:24:05.358 Shutdown signal received, stop current app iteration 00:24:05.358 ************************************ 00:24:05.358 END TEST app_repeat 00:24:05.358 ************************************ 00:24:05.358 15:53:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:24:05.358 15:53:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:24:05.358 00:24:05.358 real 0m18.099s 00:24:05.358 user 0m39.771s 00:24:05.358 sys 0m2.111s 00:24:05.358 15:53:37 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:05.358 15:53:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:05.358 15:53:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:24:05.358 15:53:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:24:05.359 15:53:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:05.359 15:53:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:05.359 15:53:37 event -- common/autotest_common.sh@10 -- # set +x 00:24:05.359 ************************************ 00:24:05.359 START TEST cpu_locks 00:24:05.359 ************************************ 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:24:05.359 * Looking for test storage... 
00:24:05.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.359 15:53:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:05.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.359 --rc genhtml_branch_coverage=1 00:24:05.359 --rc genhtml_function_coverage=1 00:24:05.359 --rc genhtml_legend=1 00:24:05.359 --rc geninfo_all_blocks=1 00:24:05.359 --rc geninfo_unexecuted_blocks=1 00:24:05.359 00:24:05.359 ' 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:05.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.359 --rc genhtml_branch_coverage=1 00:24:05.359 --rc genhtml_function_coverage=1 00:24:05.359 --rc genhtml_legend=1 00:24:05.359 --rc geninfo_all_blocks=1 00:24:05.359 --rc geninfo_unexecuted_blocks=1 
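The `cmp_versions` trace above (`lt 1.15 2`) splits each dotted version on `.` and `-` and compares component by component, padding the shorter version with zeros. A simplified sketch of that comparison; the function name `ver_lt` and the zero-padding shortcut are this sketch's own simplifications, not the script's exact code:

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above.
# Returns 0 (true) when $1 is strictly less than $2.
ver_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        # Missing components count as 0, so 1.15 compares as 1.15.0 vs 2.0.0
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

This is why the trace resolves `lcov --version` (1.15) as older than 2 and falls back to the legacy `--rc lcov_*` coverage options.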
00:24:05.359 00:24:05.359 ' 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:05.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.359 --rc genhtml_branch_coverage=1 00:24:05.359 --rc genhtml_function_coverage=1 00:24:05.359 --rc genhtml_legend=1 00:24:05.359 --rc geninfo_all_blocks=1 00:24:05.359 --rc geninfo_unexecuted_blocks=1 00:24:05.359 00:24:05.359 ' 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:05.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.359 --rc genhtml_branch_coverage=1 00:24:05.359 --rc genhtml_function_coverage=1 00:24:05.359 --rc genhtml_legend=1 00:24:05.359 --rc geninfo_all_blocks=1 00:24:05.359 --rc geninfo_unexecuted_blocks=1 00:24:05.359 00:24:05.359 ' 00:24:05.359 15:53:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:24:05.359 15:53:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:24:05.359 15:53:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:24:05.359 15:53:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:05.359 15:53:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:05.359 ************************************ 00:24:05.359 START TEST default_locks 00:24:05.359 ************************************ 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57790 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57790 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 57790 ']' 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:05.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:05.359 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:24:05.359 [2024-11-05 15:53:37.751573] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:05.359 [2024-11-05 15:53:37.751675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57790 ] 00:24:05.618 [2024-11-05 15:53:37.907120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.618 [2024-11-05 15:53:38.010971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57790 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57790 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57790 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 57790 ']' 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 57790 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57790 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:06.552 killing process with pid 57790 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 57790' 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 57790 00:24:06.552 15:53:38 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 57790 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57790 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57790 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 57790 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 57790 ']' 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
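The `killprocess` traces above use `kill -0 <pid>` as a liveness probe: signal 0 delivers nothing but still reports whether the pid exists, so the helper can verify the target is alive (and later that it has exited) before and after sending the real signal. A minimal sketch of the probe, using the current shell's own pid and a deliberately out-of-range pid as examples:

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 liveness probe traced above. Signal 0 sends no
# signal; the call only succeeds if the pid exists and is signalable.
pid=$$   # this shell's own pid is guaranteed to be alive

if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
fi

# 99999999 exceeds any realistic pid_max, so the probe fails (demo assumption).
if ! kill -0 99999999 2>/dev/null; then
    echo "no such process"
fi
```

The subsequent `ps --no-headers -o comm= <pid>` seen in the trace is a second check that the live pid is still the expected `reactor_0` process and not a recycled pid, before the real `kill` and `wait` run.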
00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:24:07.928 ERROR: process (pid: 57790) is no longer running 00:24:07.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (57790) - No such process 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.928 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.929 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.929 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:24:07.929 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:24:07.929 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:24:07.929 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:24:07.929 00:24:07.929 real 0m2.612s 00:24:07.929 user 0m2.599s 00:24:07.929 sys 0m0.432s 00:24:07.929 ************************************ 00:24:07.929 END TEST default_locks 00:24:07.929 ************************************ 00:24:07.929 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.929 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:24:07.929 15:53:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:24:07.929 15:53:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:24:07.929 15:53:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:07.929 15:53:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:07.929 ************************************ 00:24:07.929 START TEST default_locks_via_rpc 00:24:07.929 ************************************ 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:24:07.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57848 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 57848 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 57848 ']' 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:07.929 15:53:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:08.188 [2024-11-05 15:53:40.404508] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:08.188 [2024-11-05 15:53:40.404967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57848 ] 00:24:08.188 [2024-11-05 15:53:40.564889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.447 [2024-11-05 15:53:40.667220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.015 15:53:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 57848 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 57848 00:24:09.015 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 57848 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 57848 ']' 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 57848 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57848 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:09.273 killing process with pid 57848 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57848' 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 57848 00:24:09.273 15:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 57848 00:24:10.646 00:24:10.646 real 0m2.678s 00:24:10.646 user 0m2.729s 00:24:10.646 sys 0m0.439s 00:24:10.646 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:10.646 15:53:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:10.646 ************************************ 00:24:10.646 END TEST default_locks_via_rpc 00:24:10.646 ************************************ 00:24:10.646 15:53:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:24:10.646 15:53:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:10.646 15:53:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:10.646 15:53:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:10.646 ************************************ 00:24:10.646 START TEST non_locking_app_on_locked_coremask 00:24:10.646 ************************************ 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57906 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 57906 /var/tmp/spdk.sock 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 57906 ']' 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:10.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:10.646 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:10.903 [2024-11-05 15:53:43.105994] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:10.903 [2024-11-05 15:53:43.106106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57906 ] 00:24:10.903 [2024-11-05 15:53:43.258606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.182 [2024-11-05 15:53:43.361241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57922 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 57922 /var/tmp/spdk2.sock 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 57922 ']' 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # 
local rpc_addr=/var/tmp/spdk2.sock 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:11.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:11.764 15:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:11.764 [2024-11-05 15:53:44.051272] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:11.764 [2024-11-05 15:53:44.051397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:24:12.021 [2024-11-05 15:53:44.223547] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:24:12.021 [2024-11-05 15:53:44.223610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.021 [2024-11-05 15:53:44.429559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.393 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:13.393 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:24:13.393 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 57906 00:24:13.393 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 57906 00:24:13.393 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 57906 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 57906 ']' 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 57906 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57906 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.651 killing process with pid 57906 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 57906' 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 57906 00:24:13.651 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 57906 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 57922 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 57922 ']' 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 57922 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57922 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:16.935 killing process with pid 57922 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57922' 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 57922 00:24:16.935 15:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 57922 00:24:17.868 00:24:17.868 real 0m6.903s 00:24:17.868 user 0m7.155s 00:24:17.868 sys 0m0.809s 00:24:17.868 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:24:17.868 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:17.868 ************************************ 00:24:17.868 END TEST non_locking_app_on_locked_coremask 00:24:17.868 ************************************ 00:24:17.868 15:53:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:24:17.868 15:53:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:17.868 15:53:49 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:17.868 15:53:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:17.868 ************************************ 00:24:17.868 START TEST locking_app_on_unlocked_coremask 00:24:17.868 ************************************ 00:24:17.868 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:24:17.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.868 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58024 00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58024 /var/tmp/spdk.sock 00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58024 ']' 00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:17.869 15:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:24:17.869 [2024-11-05 15:53:50.052834] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:17.869 [2024-11-05 15:53:50.052971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58024 ] 00:24:17.869 [2024-11-05 15:53:50.210784] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:24:17.869 [2024-11-05 15:53:50.210836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.126 [2024-11-05 15:53:50.311064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58040 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58040 /var/tmp/spdk2.sock 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58040 ']' 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.691 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:18.691 [2024-11-05 15:53:51.006189] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:18.691 [2024-11-05 15:53:51.006312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ] 00:24:18.948 [2024-11-05 15:53:51.178084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.206 [2024-11-05 15:53:51.383166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.141 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:20.141 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:24:20.141 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58040 00:24:20.141 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58040 00:24:20.141 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58024 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58024 ']' 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58024 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58024 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:24:20.751 killing process with pid 58024 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58024' 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58024 00:24:20.751 15:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58024 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58040 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58040 ']' 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58040 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58040 00:24:24.032 killing process with pid 58040 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58040' 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58040 00:24:24.032 15:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 58040 00:24:24.598 ************************************ 00:24:24.598 END TEST locking_app_on_unlocked_coremask 00:24:24.598 ************************************ 00:24:24.598 00:24:24.598 real 0m6.941s 00:24:24.598 user 0m7.219s 00:24:24.598 sys 0m0.835s 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 15:53:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:24:24.598 15:53:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:24.598 15:53:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:24.598 15:53:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 ************************************ 00:24:24.598 START TEST locking_app_on_locked_coremask 00:24:24.598 ************************************ 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:24:24.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58142 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58142 /var/tmp/spdk.sock 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58142 ']' 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 15:53:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:24.855 [2024-11-05 15:53:57.018197] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:24.855 [2024-11-05 15:53:57.018321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58142 ] 00:24:24.855 [2024-11-05 15:53:57.180031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.112 [2024-11-05 15:53:57.284285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58158 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58158 /var/tmp/spdk2.sock 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58158 /var/tmp/spdk2.sock 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58158 /var/tmp/spdk2.sock 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58158 ']' 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:25.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:25.678 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:25.678 [2024-11-05 15:53:57.967762] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:25.678 [2024-11-05 15:53:57.967900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58158 ] 00:24:25.938 [2024-11-05 15:53:58.141180] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58142 has claimed it. 00:24:25.938 [2024-11-05 15:53:58.141263] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:24:26.209 ERROR: process (pid: 58158) is no longer running 00:24:26.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58158) - No such process 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58142 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58142 00:24:26.209 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58142 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58142 ']' 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58142 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58142 00:24:26.472 
15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:26.472 killing process with pid 58142 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58142' 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58142 00:24:26.472 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58142 00:24:27.848 00:24:27.848 real 0m3.158s 00:24:27.848 user 0m3.383s 00:24:27.848 sys 0m0.558s 00:24:27.848 15:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:27.848 15:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:27.848 ************************************ 00:24:27.848 END TEST locking_app_on_locked_coremask 00:24:27.848 ************************************ 00:24:27.848 15:54:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:24:27.848 15:54:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:27.848 15:54:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:27.848 15:54:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:27.848 ************************************ 00:24:27.848 START TEST locking_overlapped_coremask 00:24:27.848 ************************************ 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58211 00:24:27.848 15:54:00 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58211 /var/tmp/spdk.sock 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58211 ']' 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:27.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:27.848 15:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:24:27.848 [2024-11-05 15:54:00.218968] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:27.848 [2024-11-05 15:54:00.219421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:24:28.107 [2024-11-05 15:54:00.370820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.107 [2024-11-05 15:54:00.458240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.107 [2024-11-05 15:54:00.458495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.107 [2024-11-05 15:54:00.458710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58229 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58229 /var/tmp/spdk2.sock 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58229 /var/tmp/spdk2.sock 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58229 /var/tmp/spdk2.sock 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58229 ']' 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:28.711 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:28.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:28.712 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:28.712 15:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:28.712 [2024-11-05 15:54:01.052960] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:28.712 [2024-11-05 15:54:01.053076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:24:28.971 [2024-11-05 15:54:01.226868] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58211 has claimed it. 00:24:28.971 [2024-11-05 15:54:01.226943] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:24:29.538 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58229) - No such process 00:24:29.538 ERROR: process (pid: 58229) is no longer running 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58211 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58211 ']' 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58211 00:24:29.538 15:54:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58211 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:29.538 killing process with pid 58211 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58211' 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58211 00:24:29.538 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58211 00:24:30.912 00:24:30.912 real 0m2.773s 00:24:30.912 user 0m7.442s 00:24:30.912 sys 0m0.445s 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:30.912 ************************************ 00:24:30.912 END TEST locking_overlapped_coremask 00:24:30.912 ************************************ 00:24:30.912 15:54:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:24:30.912 15:54:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:30.912 15:54:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:30.912 15:54:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:30.912 ************************************ 00:24:30.912 START TEST 
locking_overlapped_coremask_via_rpc 00:24:30.912 ************************************ 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58282 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58282 /var/tmp/spdk.sock 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58282 ']' 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:24:30.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:30.912 15:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:30.912 [2024-11-05 15:54:03.022431] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:30.912 [2024-11-05 15:54:03.022535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ] 00:24:30.912 [2024-11-05 15:54:03.173345] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:24:30.912 [2024-11-05 15:54:03.173395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:30.912 [2024-11-05 15:54:03.301114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.912 [2024-11-05 15:54:03.301219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.912 [2024-11-05 15:54:03.301233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58295 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58295 /var/tmp/spdk2.sock 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58295 ']' 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:31.479 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:31.479 15:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:31.479 [2024-11-05 15:54:03.889061] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:31.479 [2024-11-05 15:54:03.889162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58295 ] 00:24:31.737 [2024-11-05 15:54:04.055829] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:24:31.737 [2024-11-05 15:54:04.055893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:31.994 [2024-11-05 15:54:04.261435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.994 [2024-11-05 15:54:04.264898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.994 [2024-11-05 15:54:04.264904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.368 15:54:05 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:33.368 [2024-11-05 15:54:05.415999] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58282 has claimed it. 00:24:33.368 request: 00:24:33.368 { 00:24:33.368 "method": "framework_enable_cpumask_locks", 00:24:33.368 "req_id": 1 00:24:33.368 } 00:24:33.368 Got JSON-RPC error response 00:24:33.368 response: 00:24:33.368 { 00:24:33.368 "code": -32603, 00:24:33.368 "message": "Failed to claim CPU core: 2" 00:24:33.368 } 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58282 /var/tmp/spdk.sock 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 58282 ']' 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58295 /var/tmp/spdk2.sock 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58295 ']' 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.368 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:24:33.627 00:24:33.627 real 0m2.861s 00:24:33.627 user 0m0.973s 00:24:33.627 sys 0m0.130s 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:33.627 15:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:33.627 ************************************ 00:24:33.627 END TEST locking_overlapped_coremask_via_rpc 00:24:33.627 ************************************ 00:24:33.627 15:54:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:24:33.627 15:54:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58282 ]] 00:24:33.627 15:54:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58282 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58282 ']' 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58282 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58282 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58282' 00:24:33.627 killing process with pid 58282 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58282 00:24:33.627 15:54:05 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58282 00:24:35.003 15:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58295 ]] 00:24:35.003 15:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58295 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58295 ']' 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58295 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58295 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:35.003 killing process with pid 58295 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 58295' 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58295 00:24:35.003 15:54:07 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58295 00:24:35.970 15:54:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:35.970 15:54:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:24:35.970 15:54:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58282 ]] 00:24:35.970 15:54:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58282 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58282 ']' 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58282 00:24:35.970 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58282) - No such process 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58282 is not found' 00:24:35.970 Process with pid 58282 is not found 00:24:35.970 15:54:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58295 ]] 00:24:35.970 15:54:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58295 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58295 ']' 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58295 00:24:35.970 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58295) - No such process 00:24:35.970 Process with pid 58295 is not found 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58295 is not found' 00:24:35.970 15:54:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:35.970 00:24:35.970 real 0m30.787s 00:24:35.970 user 0m51.626s 00:24:35.970 sys 0m4.392s 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:35.970 ************************************ 00:24:35.970 15:54:08 event.cpu_locks -- common/autotest_common.sh@10 
-- # set +x 00:24:35.970 END TEST cpu_locks 00:24:35.970 ************************************ 00:24:35.970 00:24:35.970 real 0m56.654s 00:24:35.970 user 1m44.076s 00:24:35.970 sys 0m7.285s 00:24:35.970 15:54:08 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:35.970 ************************************ 00:24:35.970 END TEST event 00:24:35.970 15:54:08 event -- common/autotest_common.sh@10 -- # set +x 00:24:35.970 ************************************ 00:24:36.229 15:54:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:36.229 15:54:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:36.229 15:54:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:36.229 15:54:08 -- common/autotest_common.sh@10 -- # set +x 00:24:36.229 ************************************ 00:24:36.229 START TEST thread 00:24:36.229 ************************************ 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:36.229 * Looking for test storage... 
00:24:36.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:36.229 15:54:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.229 15:54:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.229 15:54:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.229 15:54:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.229 15:54:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.229 15:54:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.229 15:54:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.229 15:54:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.229 15:54:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.229 15:54:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.229 15:54:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.229 15:54:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:24:36.229 15:54:08 thread -- scripts/common.sh@345 -- # : 1 00:24:36.229 15:54:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.229 15:54:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.229 15:54:08 thread -- scripts/common.sh@365 -- # decimal 1 00:24:36.229 15:54:08 thread -- scripts/common.sh@353 -- # local d=1 00:24:36.229 15:54:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.229 15:54:08 thread -- scripts/common.sh@355 -- # echo 1 00:24:36.229 15:54:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.229 15:54:08 thread -- scripts/common.sh@366 -- # decimal 2 00:24:36.229 15:54:08 thread -- scripts/common.sh@353 -- # local d=2 00:24:36.229 15:54:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.229 15:54:08 thread -- scripts/common.sh@355 -- # echo 2 00:24:36.229 15:54:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.229 15:54:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.229 15:54:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.229 15:54:08 thread -- scripts/common.sh@368 -- # return 0 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:36.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.229 --rc genhtml_branch_coverage=1 00:24:36.229 --rc genhtml_function_coverage=1 00:24:36.229 --rc genhtml_legend=1 00:24:36.229 --rc geninfo_all_blocks=1 00:24:36.229 --rc geninfo_unexecuted_blocks=1 00:24:36.229 00:24:36.229 ' 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:36.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.229 --rc genhtml_branch_coverage=1 00:24:36.229 --rc genhtml_function_coverage=1 00:24:36.229 --rc genhtml_legend=1 00:24:36.229 --rc geninfo_all_blocks=1 00:24:36.229 --rc geninfo_unexecuted_blocks=1 00:24:36.229 00:24:36.229 ' 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:36.229 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.229 --rc genhtml_branch_coverage=1 00:24:36.229 --rc genhtml_function_coverage=1 00:24:36.229 --rc genhtml_legend=1 00:24:36.229 --rc geninfo_all_blocks=1 00:24:36.229 --rc geninfo_unexecuted_blocks=1 00:24:36.229 00:24:36.229 ' 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:36.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.229 --rc genhtml_branch_coverage=1 00:24:36.229 --rc genhtml_function_coverage=1 00:24:36.229 --rc genhtml_legend=1 00:24:36.229 --rc geninfo_all_blocks=1 00:24:36.229 --rc geninfo_unexecuted_blocks=1 00:24:36.229 00:24:36.229 ' 00:24:36.229 15:54:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:36.229 15:54:08 thread -- common/autotest_common.sh@10 -- # set +x 00:24:36.229 ************************************ 00:24:36.229 START TEST thread_poller_perf 00:24:36.229 ************************************ 00:24:36.229 15:54:08 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:36.229 [2024-11-05 15:54:08.556976] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:36.229 [2024-11-05 15:54:08.557090] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58449 ] 00:24:36.488 [2024-11-05 15:54:08.712351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.488 [2024-11-05 15:54:08.795486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.488 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:24:37.860 [2024-11-05T15:54:10.275Z] ====================================== 00:24:37.860 [2024-11-05T15:54:10.275Z] busy:2608500774 (cyc) 00:24:37.860 [2024-11-05T15:54:10.275Z] total_run_count: 381000 00:24:37.860 [2024-11-05T15:54:10.275Z] tsc_hz: 2600000000 (cyc) 00:24:37.860 [2024-11-05T15:54:10.275Z] ====================================== 00:24:37.860 [2024-11-05T15:54:10.275Z] poller_cost: 6846 (cyc), 2633 (nsec) 00:24:37.860 00:24:37.860 real 0m1.401s 00:24:37.860 user 0m1.227s 00:24:37.860 sys 0m0.067s 00:24:37.860 15:54:09 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:37.860 ************************************ 00:24:37.860 END TEST thread_poller_perf 00:24:37.860 15:54:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.860 ************************************ 00:24:37.860 15:54:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:37.860 15:54:09 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:24:37.860 15:54:09 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:37.860 15:54:09 thread -- common/autotest_common.sh@10 -- # set +x 00:24:37.860 ************************************ 00:24:37.860 START TEST thread_poller_perf 00:24:37.860 
************************************ 00:24:37.860 15:54:09 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:37.860 [2024-11-05 15:54:09.995034] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:37.860 [2024-11-05 15:54:09.995150] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58486 ] 00:24:37.860 [2024-11-05 15:54:10.152783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.860 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:24:37.860 [2024-11-05 15:54:10.254742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.230 [2024-11-05T15:54:11.645Z] ====================================== 00:24:39.230 [2024-11-05T15:54:11.645Z] busy:2603441662 (cyc) 00:24:39.230 [2024-11-05T15:54:11.645Z] total_run_count: 3924000 00:24:39.230 [2024-11-05T15:54:11.645Z] tsc_hz: 2600000000 (cyc) 00:24:39.230 [2024-11-05T15:54:11.645Z] ====================================== 00:24:39.230 [2024-11-05T15:54:11.645Z] poller_cost: 663 (cyc), 255 (nsec) 00:24:39.230 00:24:39.230 real 0m1.450s 00:24:39.230 user 0m1.267s 00:24:39.230 sys 0m0.073s 00:24:39.230 15:54:11 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:39.230 ************************************ 00:24:39.230 END TEST thread_poller_perf 00:24:39.230 ************************************ 00:24:39.230 15:54:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:39.230 15:54:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:24:39.230 00:24:39.230 real 0m3.065s 00:24:39.230 user 0m2.608s 00:24:39.230 sys 0m0.245s 00:24:39.230 15:54:11 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:24:39.230 15:54:11 thread -- common/autotest_common.sh@10 -- # set +x 00:24:39.230 ************************************ 00:24:39.230 END TEST thread 00:24:39.230 ************************************ 00:24:39.230 15:54:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:24:39.230 15:54:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:24:39.230 15:54:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:39.230 15:54:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:39.230 15:54:11 -- common/autotest_common.sh@10 -- # set +x 00:24:39.230 ************************************ 00:24:39.230 START TEST app_cmdline 00:24:39.230 ************************************ 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:24:39.230 * Looking for test storage... 00:24:39.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.230 15:54:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:39.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.230 --rc genhtml_branch_coverage=1 00:24:39.230 --rc genhtml_function_coverage=1 00:24:39.230 --rc 
genhtml_legend=1 00:24:39.230 --rc geninfo_all_blocks=1 00:24:39.230 --rc geninfo_unexecuted_blocks=1 00:24:39.230 00:24:39.230 ' 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:39.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.230 --rc genhtml_branch_coverage=1 00:24:39.230 --rc genhtml_function_coverage=1 00:24:39.230 --rc genhtml_legend=1 00:24:39.230 --rc geninfo_all_blocks=1 00:24:39.230 --rc geninfo_unexecuted_blocks=1 00:24:39.230 00:24:39.230 ' 00:24:39.230 15:54:11 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:39.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.230 --rc genhtml_branch_coverage=1 00:24:39.230 --rc genhtml_function_coverage=1 00:24:39.230 --rc genhtml_legend=1 00:24:39.230 --rc geninfo_all_blocks=1 00:24:39.230 --rc geninfo_unexecuted_blocks=1 00:24:39.230 00:24:39.230 ' 00:24:39.231 15:54:11 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:39.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.231 --rc genhtml_branch_coverage=1 00:24:39.231 --rc genhtml_function_coverage=1 00:24:39.231 --rc genhtml_legend=1 00:24:39.231 --rc geninfo_all_blocks=1 00:24:39.231 --rc geninfo_unexecuted_blocks=1 00:24:39.231 00:24:39.231 ' 00:24:39.231 15:54:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:24:39.231 15:54:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58569 00:24:39.231 15:54:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58569 00:24:39.231 15:54:11 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 58569 ']' 00:24:39.231 15:54:11 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.231 15:54:11 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:39.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
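For reference, the `poller_cost` figures in the two `thread_poller_perf` summaries above follow directly from the counters the tool prints: busy cycles divided by `total_run_count`, then converted to nanoseconds via `tsc_hz`. A minimal sketch (the assumption here is that the tool truncates rather than rounds, which matches the logged values):

```python
# Sketch: how poller_cost is derived from the counters thread_poller_perf prints.
# Input values are taken verbatim from the two runs logged above (tsc_hz = 2.6 GHz).

def poller_cost(busy_cyc: int, total_run_count: int, tsc_hz: int) -> tuple[int, int]:
    """Return (cost in cycles, cost in nanoseconds), truncated like the log output."""
    cyc = busy_cyc // total_run_count
    nsec = cyc * 1_000_000_000 // tsc_hz
    return cyc, nsec

# Run with a 1 us poller period: 381000 iterations in ~1 s.
print(poller_cost(2608500774, 381000, 2600000000))   # -> (6846, 2633)
# Run with a 0 us poller period: 3924000 iterations.
print(poller_cost(2603441662, 3924000, 2600000000))  # -> (663, 255)
```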
00:24:39.231 15:54:11 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.231 15:54:11 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:39.231 15:54:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:39.231 15:54:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:24:39.488 [2024-11-05 15:54:11.697685] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:39.488 [2024-11-05 15:54:11.697820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58569 ] 00:24:39.488 [2024-11-05 15:54:11.859641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.745 [2024-11-05 15:54:11.958972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.311 15:54:12 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:40.311 15:54:12 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:24:40.311 15:54:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:24:40.311 { 00:24:40.311 "version": "SPDK v25.01-pre git sha1 f220d590c", 00:24:40.311 "fields": { 00:24:40.311 "major": 25, 00:24:40.311 "minor": 1, 00:24:40.311 "patch": 0, 00:24:40.311 "suffix": "-pre", 00:24:40.311 "commit": "f220d590c" 00:24:40.311 } 00:24:40.311 } 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:24:40.569 15:54:12 app_cmdline 
-- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.569 15:54:12 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:40.569 request: 00:24:40.569 { 00:24:40.569 "method": "env_dpdk_get_mem_stats", 00:24:40.569 "req_id": 1 00:24:40.569 } 00:24:40.569 Got JSON-RPC error response 00:24:40.569 response: 00:24:40.569 { 00:24:40.569 "code": -32601, 00:24:40.569 "message": "Method not found" 00:24:40.569 } 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.569 15:54:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58569 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 58569 ']' 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 58569 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:40.569 15:54:12 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58569 00:24:40.826 15:54:13 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:40.826 15:54:13 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:40.826 killing process with pid 58569 00:24:40.826 15:54:13 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58569' 00:24:40.826 15:54:13 app_cmdline -- common/autotest_common.sh@971 -- # kill 58569 00:24:40.826 15:54:13 app_cmdline -- common/autotest_common.sh@976 -- # wait 58569 00:24:42.197 00:24:42.197 real 0m3.025s 00:24:42.197 user 0m3.286s 00:24:42.197 sys 0m0.428s 00:24:42.197 
15:54:14 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:42.197 15:54:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:42.197 ************************************ 00:24:42.197 END TEST app_cmdline 00:24:42.197 ************************************ 00:24:42.197 15:54:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:24:42.197 15:54:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:42.197 15:54:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:42.197 15:54:14 -- common/autotest_common.sh@10 -- # set +x 00:24:42.197 ************************************ 00:24:42.197 START TEST version 00:24:42.197 ************************************ 00:24:42.197 15:54:14 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:24:42.197 * Looking for test storage... 00:24:42.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:24:42.455 15:54:14 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:42.455 15:54:14 version -- common/autotest_common.sh@1691 -- # lcov --version 00:24:42.455 15:54:14 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:42.455 15:54:14 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:42.455 15:54:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.455 15:54:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.455 15:54:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.455 15:54:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.455 15:54:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.455 15:54:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.455 15:54:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.455 15:54:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.455 15:54:14 version -- scripts/common.sh@340 -- # ver1_l=2 
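The `app_cmdline` test above starts `spdk_tgt` with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so the later `env_dpdk_get_mem_stats` call fails with JSON-RPC error `-32601` ("Method not found"), exactly as logged. A minimal sketch of that allowlist dispatch policy (the function names are illustrative, not SPDK's internals; the error shape follows the JSON-RPC 2.0 spec and the logged response):

```python
# Sketch of an RPC-method allowlist: any method outside the allowed set is
# answered with JSON-RPC error -32601, matching the error response logged above.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(method: str, req_id: int = 1) -> dict:
    if method not in ALLOWED:
        return {"jsonrpc": "2.0", "id": req_id,
                "error": {"code": -32601, "message": "Method not found"}}
    # A real server would route to a handler here; return a stub result.
    return {"jsonrpc": "2.0", "id": req_id, "result": {}}

print(dispatch("env_dpdk_get_mem_stats")["error"]["code"])  # -> -32601
```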
00:24:42.455 15:54:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.455 15:54:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.455 15:54:14 version -- scripts/common.sh@344 -- # case "$op" in 00:24:42.455 15:54:14 version -- scripts/common.sh@345 -- # : 1 00:24:42.455 15:54:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.455 15:54:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.455 15:54:14 version -- scripts/common.sh@365 -- # decimal 1 00:24:42.455 15:54:14 version -- scripts/common.sh@353 -- # local d=1 00:24:42.455 15:54:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.456 15:54:14 version -- scripts/common.sh@355 -- # echo 1 00:24:42.456 15:54:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.456 15:54:14 version -- scripts/common.sh@366 -- # decimal 2 00:24:42.456 15:54:14 version -- scripts/common.sh@353 -- # local d=2 00:24:42.456 15:54:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.456 15:54:14 version -- scripts/common.sh@355 -- # echo 2 00:24:42.456 15:54:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.456 15:54:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.456 15:54:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.456 15:54:14 version -- scripts/common.sh@368 -- # return 0 00:24:42.456 15:54:14 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.456 15:54:14 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:42.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.456 --rc genhtml_branch_coverage=1 00:24:42.456 --rc genhtml_function_coverage=1 00:24:42.456 --rc genhtml_legend=1 00:24:42.456 --rc geninfo_all_blocks=1 00:24:42.456 --rc geninfo_unexecuted_blocks=1 00:24:42.456 00:24:42.456 ' 00:24:42.456 15:54:14 version -- 
common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:42.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.456 --rc genhtml_branch_coverage=1 00:24:42.456 --rc genhtml_function_coverage=1 00:24:42.456 --rc genhtml_legend=1 00:24:42.456 --rc geninfo_all_blocks=1 00:24:42.456 --rc geninfo_unexecuted_blocks=1 00:24:42.456 00:24:42.456 ' 00:24:42.456 15:54:14 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:42.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.456 --rc genhtml_branch_coverage=1 00:24:42.456 --rc genhtml_function_coverage=1 00:24:42.456 --rc genhtml_legend=1 00:24:42.456 --rc geninfo_all_blocks=1 00:24:42.456 --rc geninfo_unexecuted_blocks=1 00:24:42.456 00:24:42.456 ' 00:24:42.456 15:54:14 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:42.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.456 --rc genhtml_branch_coverage=1 00:24:42.456 --rc genhtml_function_coverage=1 00:24:42.456 --rc genhtml_legend=1 00:24:42.456 --rc geninfo_all_blocks=1 00:24:42.456 --rc geninfo_unexecuted_blocks=1 00:24:42.456 00:24:42.456 ' 00:24:42.456 15:54:14 version -- app/version.sh@17 -- # get_header_version major 00:24:42.456 15:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # cut -f2 00:24:42.456 15:54:14 version -- app/version.sh@17 -- # major=25 00:24:42.456 15:54:14 version -- app/version.sh@18 -- # get_header_version minor 00:24:42.456 15:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # cut -f2 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:24:42.456 15:54:14 version -- app/version.sh@18 -- 
# minor=1 00:24:42.456 15:54:14 version -- app/version.sh@19 -- # get_header_version patch 00:24:42.456 15:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # cut -f2 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:24:42.456 15:54:14 version -- app/version.sh@19 -- # patch=0 00:24:42.456 15:54:14 version -- app/version.sh@20 -- # get_header_version suffix 00:24:42.456 15:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # cut -f2 00:24:42.456 15:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:24:42.456 15:54:14 version -- app/version.sh@20 -- # suffix=-pre 00:24:42.456 15:54:14 version -- app/version.sh@22 -- # version=25.1 00:24:42.456 15:54:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:24:42.456 15:54:14 version -- app/version.sh@28 -- # version=25.1rc0 00:24:42.456 15:54:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:42.456 15:54:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:24:42.456 15:54:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:24:42.456 15:54:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:24:42.456 ************************************ 00:24:42.456 END TEST version 00:24:42.456 ************************************ 00:24:42.456 00:24:42.456 real 0m0.180s 00:24:42.456 user 0m0.116s 00:24:42.456 sys 0m0.094s 00:24:42.456 15:54:14 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:42.456 15:54:14 version -- 
common/autotest_common.sh@10 -- # set +x 00:24:42.456 15:54:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:24:42.456 15:54:14 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:24:42.456 15:54:14 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:24:42.456 15:54:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:42.456 15:54:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:42.456 15:54:14 -- common/autotest_common.sh@10 -- # set +x 00:24:42.456 ************************************ 00:24:42.456 START TEST bdev_raid 00:24:42.456 ************************************ 00:24:42.456 15:54:14 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:24:42.456 * Looking for test storage... 00:24:42.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:42.456 15:54:14 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:42.456 15:54:14 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:42.456 15:54:14 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:24:42.744 15:54:14 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.744 
15:54:14 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@345 -- # : 1 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:24:42.744 15:54:14 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.745 15:54:14 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:24:42.745 15:54:14 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.745 15:54:14 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.745 15:54:14 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.745 15:54:14 bdev_raid -- scripts/common.sh@368 -- # return 0 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:42.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.745 --rc genhtml_branch_coverage=1 00:24:42.745 --rc genhtml_function_coverage=1 00:24:42.745 --rc genhtml_legend=1 00:24:42.745 --rc geninfo_all_blocks=1 00:24:42.745 --rc geninfo_unexecuted_blocks=1 00:24:42.745 00:24:42.745 ' 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:24:42.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.745 --rc genhtml_branch_coverage=1 00:24:42.745 --rc genhtml_function_coverage=1 00:24:42.745 --rc genhtml_legend=1 00:24:42.745 --rc geninfo_all_blocks=1 00:24:42.745 --rc geninfo_unexecuted_blocks=1 00:24:42.745 00:24:42.745 ' 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:42.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.745 --rc genhtml_branch_coverage=1 00:24:42.745 --rc genhtml_function_coverage=1 00:24:42.745 --rc genhtml_legend=1 00:24:42.745 --rc geninfo_all_blocks=1 00:24:42.745 --rc geninfo_unexecuted_blocks=1 00:24:42.745 00:24:42.745 ' 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:42.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.745 --rc genhtml_branch_coverage=1 00:24:42.745 --rc genhtml_function_coverage=1 00:24:42.745 --rc genhtml_legend=1 00:24:42.745 --rc geninfo_all_blocks=1 00:24:42.745 --rc geninfo_unexecuted_blocks=1 00:24:42.745 00:24:42.745 ' 00:24:42.745 15:54:14 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:42.745 15:54:14 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:24:42.745 15:54:14 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:24:42.745 15:54:14 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:24:42.745 15:54:14 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:24:42.745 15:54:14 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:24:42.745 15:54:14 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:42.745 15:54:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
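The `cmp_versions` trace that recurs in these sections (the `lt 1.15 2` guard on the installed lcov version) splits each version string on `.`, `-` and `:` and compares the numeric components left to right, padding the shorter list. A minimal sketch of that comparison; treating missing or non-numeric components as 0 is an assumption, not confirmed by the trace:

```python
import re

def lt(a: str, b: str) -> bool:
    """Component-wise 'less than' over version strings split on '.', '-' and ':',
    mirroring the cmp_versions shell trace above (missing components default to 0)."""
    pa = [int(x) if x.isdigit() else 0 for x in re.split(r"[.:-]", a)]
    pb = [int(x) if x.isdigit() else 0 for x in re.split(r"[.:-]", b)]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    for x, y in zip(pa, pb):
        if x != y:
            return x < y
    return False  # equal versions are not 'less than'

print(lt("1.15", "2"))  # -> True, the check the lcov version guard performs
```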
00:24:42.745 ************************************ 00:24:42.745 START TEST raid1_resize_data_offset_test 00:24:42.745 ************************************ 00:24:42.745 Process raid pid: 58740 00:24:42.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=58740 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 58740' 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 58740 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 58740 ']' 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.745 15:54:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:42.745 [2024-11-05 15:54:14.982015] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:42.745 [2024-11-05 15:54:14.982125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.003 [2024-11-05 15:54:15.148191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.003 [2024-11-05 15:54:15.250450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.003 [2024-11-05 15:54:15.389812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.003 [2024-11-05 15:54:15.389857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.667 malloc0 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.667 malloc1 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.667 15:54:15 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.667 null0 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.667 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.667 [2024-11-05 15:54:15.939535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:24:43.667 [2024-11-05 15:54:15.941450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:43.667 [2024-11-05 15:54:15.941581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:24:43.667 [2024-11-05 15:54:15.941734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:43.667 [2024-11-05 15:54:15.941826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:24:43.667 [2024-11-05 15:54:15.942300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:43.667 [2024-11-05 15:54:15.942526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:43.667 [2024-11-05 15:54:15.942605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:24:43.667 [2024-11-05 15:54:15.942813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
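The data-offset checks below read `.[].base_bdevs_list[2].data_offset` from `bdev_raid_get_bdevs all` with `jq` and compare it to the expected block count (2048, then 2070 after the rebuild onto `malloc2`). An equivalent extraction in Python; the JSON shape here is a reduced illustration, not the full `bdev_raid_get_bdevs` output:

```python
import json

# Illustrative sample only: a trimmed-down shape of bdev_raid_get_bdevs output.
# The test's jq filter '.[].base_bdevs_list[2].data_offset' selects the third
# base bdev's data_offset from each raid bdev in the array.
sample = json.loads("""
[{"name": "Raid",
  "base_bdevs_list": [
     {"name": "malloc0", "data_offset": 2048},
     {"name": "malloc1", "data_offset": 2048},
     {"name": "null0",   "data_offset": 2048}]}]
""")

offsets = [bdev["base_bdevs_list"][2]["data_offset"] for bdev in sample]
print(offsets)  # -> [2048]
```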
00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.668 [2024-11-05 15:54:15.979551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.668 15:54:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.232 malloc2 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.233 [2024-11-05 15:54:16.349309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:44.233 [2024-11-05 15:54:16.361615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.233 [2024-11-05 15:54:16.363503] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 58740 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 58740 ']' 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 58740 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58740 00:24:44.233 killing process with pid 58740 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58740' 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 58740 00:24:44.233 15:54:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 58740 00:24:44.233 [2024-11-05 15:54:16.412042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:44.233 [2024-11-05 15:54:16.413052] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:24:44.233 [2024-11-05 15:54:16.413104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.233 [2024-11-05 15:54:16.413120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:24:44.233 [2024-11-05 15:54:16.436266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:44.233 [2024-11-05 15:54:16.436557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:44.233 [2024-11-05 15:54:16.436571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:24:45.165 [2024-11-05 15:54:17.558758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:46.100 ************************************ 00:24:46.100 END TEST raid1_resize_data_offset_test 00:24:46.100 ************************************ 00:24:46.100 15:54:18 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:24:46.100 00:24:46.100 real 0m3.325s 00:24:46.100 user 0m3.282s 00:24:46.100 sys 0m0.391s 00:24:46.100 15:54:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:46.100 15:54:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.100 15:54:18 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:24:46.100 15:54:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:46.100 15:54:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:46.100 15:54:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:46.100 ************************************ 00:24:46.100 START TEST raid0_resize_superblock_test 00:24:46.100 ************************************ 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=58813 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 58813' 00:24:46.100 Process raid pid: 58813 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 58813 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 58813 ']' 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:24:46.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:46.100 15:54:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.100 [2024-11-05 15:54:18.347126] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:46.100 [2024-11-05 15:54:18.347389] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.100 [2024-11-05 15:54:18.503277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.358 [2024-11-05 15:54:18.586395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.358 [2024-11-05 15:54:18.694055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:46.358 [2024-11-05 15:54:18.694086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:46.924 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:46.924 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:24:46.924 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:24:46.924 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.924 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:24:47.182 malloc0 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.182 [2024-11-05 15:54:19.578559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:24:47.182 [2024-11-05 15:54:19.578609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.182 [2024-11-05 15:54:19.578627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:47.182 [2024-11-05 15:54:19.578637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.182 [2024-11-05 15:54:19.580377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.182 [2024-11-05 15:54:19.580407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:24:47.182 pt0 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.182 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.438 08172bd8-ea07-4dc0-b3e7-560d0c197ce1 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.438 c1e25374-47c0-4592-8ddc-927b54cc0237 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.438 013d27ea-bb30-4285-b708-0688e1a67882 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.438 [2024-11-05 15:54:19.667436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c1e25374-47c0-4592-8ddc-927b54cc0237 is claimed 00:24:47.438 [2024-11-05 15:54:19.667503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 013d27ea-bb30-4285-b708-0688e1a67882 is claimed 00:24:47.438 [2024-11-05 15:54:19.667606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:47.438 [2024-11-05 15:54:19.667618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:24:47.438 [2024-11-05 15:54:19.667814] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:47.438 [2024-11-05 15:54:19.667963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:47.438 [2024-11-05 15:54:19.667971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:24:47.438 [2024-11-05 15:54:19.668084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.438 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:24:47.439 15:54:19 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:24:47.439 [2024-11-05 15:54:19.735632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 [2024-11-05 15:54:19.767600] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:47.439 [2024-11-05 15:54:19.767620] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c1e25374-47c0-4592-8ddc-927b54cc0237' was resized: old size 131072, new size 204800 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 [2024-11-05 15:54:19.775531] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:47.439 [2024-11-05 15:54:19.775549] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '013d27ea-bb30-4285-b708-0688e1a67882' was resized: old size 131072, new size 204800 00:24:47.439 [2024-11-05 15:54:19.775571] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 15:54:19 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:47.439 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:24:47.439 [2024-11-05 15:54:19.847642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.696 [2024-11-05 15:54:19.875465] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:24:47.696 [2024-11-05 15:54:19.875519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:24:47.696 [2024-11-05 15:54:19.875528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:47.696 [2024-11-05 15:54:19.875541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:24:47.696 [2024-11-05 15:54:19.875626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:47.696 [2024-11-05 15:54:19.875653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:47.696 [2024-11-05 15:54:19.875663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.696 [2024-11-05 15:54:19.883419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:24:47.696 [2024-11-05 15:54:19.883460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.696 [2024-11-05 15:54:19.883474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:47.696 [2024-11-05 15:54:19.883483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.696 [2024-11-05 15:54:19.885194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.696 [2024-11-05 15:54:19.885222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:24:47.696 [2024-11-05 15:54:19.886472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c1e25374-47c0-4592-8ddc-927b54cc0237 00:24:47.696 [2024-11-05 15:54:19.886604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c1e25374-47c0-4592-8ddc-927b54cc0237 is claimed 00:24:47.696 [2024-11-05 15:54:19.886698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 013d27ea-bb30-4285-b708-0688e1a67882 00:24:47.696 [2024-11-05 15:54:19.886714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 013d27ea-bb30-4285-b708-0688e1a67882 is claimed 00:24:47.696 [2024-11-05 15:54:19.886804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 013d27ea-bb30-4285-b708-0688e1a67882 (2) smaller than existing raid bdev Raid (3) 00:24:47.696 [2024-11-05 15:54:19.886819] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c1e25374-47c0-4592-8ddc-927b54cc0237: File exists 00:24:47.696 [2024-11-05 15:54:19.886868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:47.696 [2024-11-05 15:54:19.886877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:24:47.696 [2024-11-05 15:54:19.887065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:47.696 [2024-11-05 15:54:19.887170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:47.696 [2024-11-05 15:54:19.887176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:24:47.696 [2024-11-05 15:54:19.887286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.696 pt0 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.696 [2024-11-05 15:54:19.903636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 58813 00:24:47.696 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 58813 ']' 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 58813 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58813 00:24:47.697 killing process with pid 58813 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58813' 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 58813 00:24:47.697 [2024-11-05 15:54:19.953032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:47.697 [2024-11-05 15:54:19.953077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:47.697 [2024-11-05 15:54:19.953110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:47.697 [2024-11-05 15:54:19.953116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:24:47.697 15:54:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 58813 00:24:48.262 [2024-11-05 15:54:20.659103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:48.834 15:54:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:24:48.834 00:24:48.834 real 0m2.935s 00:24:48.834 user 0m3.260s 00:24:48.834 sys 0m0.355s 00:24:48.834 15:54:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:48.834 ************************************ 00:24:48.834 END TEST raid0_resize_superblock_test 00:24:48.834 
************************************ 00:24:48.834 15:54:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.091 15:54:21 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:24:49.091 15:54:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:49.091 15:54:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:49.091 15:54:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:49.091 ************************************ 00:24:49.091 START TEST raid1_resize_superblock_test 00:24:49.091 ************************************ 00:24:49.091 15:54:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:24:49.091 15:54:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:24:49.091 15:54:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=58895 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 58895' 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:49.092 Process raid pid: 58895 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 58895 00:24:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 58895 ']' 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.092 15:54:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.092 [2024-11-05 15:54:21.319727] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:24:49.092 [2024-11-05 15:54:21.319859] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.092 [2024-11-05 15:54:21.478280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.349 [2024-11-05 15:54:21.574626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.349 [2024-11-05 15:54:21.710832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.349 [2024-11-05 15:54:21.710872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.913 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:49.913 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:24:49.913 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:24:49.913 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.913 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.171 malloc0 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.171 [2024-11-05 15:54:22.562045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:24:50.171 [2024-11-05 15:54:22.562113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.171 [2024-11-05 15:54:22.562137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:50.171 [2024-11-05 15:54:22.562150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.171 [2024-11-05 15:54:22.564344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.171 [2024-11-05 15:54:22.564493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:24:50.171 pt0 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.171 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 591c32d7-5d3f-4ab0-b04d-c33758dc08fd 00:24:50.429 15:54:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 70fd3eb0-14c8-42c9-9ad3-d8915521a60c 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 88a0d181-1e2a-43dd-a12a-f8e9da8d8377 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 [2024-11-05 15:54:22.651684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 70fd3eb0-14c8-42c9-9ad3-d8915521a60c is claimed 00:24:50.429 [2024-11-05 15:54:22.651924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 88a0d181-1e2a-43dd-a12a-f8e9da8d8377 is claimed 00:24:50.429 [2024-11-05 15:54:22.652071] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:50.429 [2024-11-05 15:54:22.652087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:24:50.429 [2024-11-05 15:54:22.652346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:50.429 [2024-11-05 15:54:22.652516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:50.429 [2024-11-05 15:54:22.652525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:24:50.429 [2024-11-05 15:54:22.652673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:24:50.429 15:54:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 [2024-11-05 15:54:22.723951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 [2024-11-05 15:54:22.755885] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:50.429 [2024-11-05 15:54:22.755909] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '70fd3eb0-14c8-42c9-9ad3-d8915521a60c' was resized: old size 131072, new size 204800 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 [2024-11-05 15:54:22.763809] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:50.429 [2024-11-05 15:54:22.763927] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '88a0d181-1e2a-43dd-a12a-f8e9da8d8377' was resized: old size 131072, new size 204800 00:24:50.429 [2024-11-05 15:54:22.763957] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:24:50.429 15:54:22 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.429 [2024-11-05 15:54:22.831971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.429 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.688 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:50.688 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:24:50.688 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:24:50.688 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:24:50.688 15:54:22 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.688 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.688 [2024-11-05 15:54:22.859738] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:24:50.688 [2024-11-05 15:54:22.859907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:24:50.688 [2024-11-05 15:54:22.859937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:24:50.689 [2024-11-05 15:54:22.860077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.689 [2024-11-05 15:54:22.860245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.689 [2024-11-05 15:54:22.860307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.689 [2024-11-05 15:54:22.860319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.689 [2024-11-05 15:54:22.867669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:24:50.689 [2024-11-05 15:54:22.867722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.689 [2024-11-05 15:54:22.867741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:50.689 [2024-11-05 15:54:22.867756] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:24:50.689 [2024-11-05 15:54:22.869960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.689 [2024-11-05 15:54:22.870077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:24:50.689 pt0 00:24:50.689 [2024-11-05 15:54:22.871640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 70fd3eb0-14c8-42c9-9ad3-d8915521a60c 00:24:50.689 [2024-11-05 15:54:22.871697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 70fd3eb0-14c8-42c9-9ad3-d8915521a60c is claimed 00:24:50.689 [2024-11-05 15:54:22.871796] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 88a0d181-1e2a-43dd-a12a-f8e9da8d8377 00:24:50.689 [2024-11-05 15:54:22.871814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 88a0d181-1e2a-43dd-a12a-f8e9da8d8377 is claimed 00:24:50.689 [2024-11-05 15:54:22.871968] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 88a0d181-1e2a-43dd-a12a-f8e9da8d8377 (2) smaller than existing raid bdev Raid (3) 00:24:50.689 [2024-11-05 15:54:22.871987] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 70fd3eb0-14c8-42c9-9ad3-d8915521a60c: File exists 00:24:50.689 [2024-11-05 15:54:22.872022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:50.689 [2024-11-05 15:54:22.872032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:50.689 [2024-11-05 15:54:22.872259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.689 [2024-11-05 15:54:22.872397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:50.689 [2024-11-05 15:54:22.872405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007b00 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:24:50.689 [2024-11-05 15:54:22.872540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:24:50.689 [2024-11-05 15:54:22.887976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 58895 00:24:50.689 15:54:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 58895 ']' 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 58895 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58895 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:50.689 killing process with pid 58895 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58895' 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 58895 00:24:50.689 [2024-11-05 15:54:22.941721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.689 [2024-11-05 15:54:22.941815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.689 15:54:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 58895 00:24:50.689 [2024-11-05 15:54:22.941881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.689 [2024-11-05 15:54:22.941891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:24:51.622 [2024-11-05 15:54:23.809943] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:52.189 ************************************ 00:24:52.189 END TEST raid1_resize_superblock_test 00:24:52.189 ************************************ 00:24:52.189 
15:54:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:24:52.189 00:24:52.189 real 0m3.249s 00:24:52.189 user 0m3.454s 00:24:52.189 sys 0m0.392s 00:24:52.189 15:54:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:52.189 15:54:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.189 15:54:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:24:52.189 15:54:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:24:52.189 15:54:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:24:52.189 15:54:24 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:24:52.189 15:54:24 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:24:52.189 15:54:24 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:24:52.189 15:54:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:52.189 15:54:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:52.189 15:54:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:52.189 ************************************ 00:24:52.189 START TEST raid_function_test_raid0 00:24:52.189 ************************************ 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:24:52.189 Process raid pid: 58981 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=58981 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 58981' 
00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 58981 00:24:52.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 58981 ']' 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:52.189 15:54:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:24:52.447 [2024-11-05 15:54:24.612578] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:24:52.447 [2024-11-05 15:54:24.612952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.447 [2024-11-05 15:54:24.765394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.705 [2024-11-05 15:54:24.864290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.705 [2024-11-05 15:54:25.000915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:52.705 [2024-11-05 15:54:25.000948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.269 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:53.269 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:24:53.269 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:24:53.269 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.269 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:24:53.269 Base_1 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:24:53.270 Base_2 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:24:53.270 [2024-11-05 15:54:25.532815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:24:53.270 [2024-11-05 15:54:25.534634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:24:53.270 [2024-11-05 15:54:25.534695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:53.270 [2024-11-05 15:54:25.534706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:53.270 [2024-11-05 15:54:25.534967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:53.270 [2024-11-05 15:54:25.535091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:53.270 [2024-11-05 15:54:25.535100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:24:53.270 [2024-11-05 15:54:25.535229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.270 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:24:53.527 [2024-11-05 15:54:25.764901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:53.527 /dev/nbd0 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:53.527 
15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.527 1+0 records in 00:24:53.527 1+0 records out 00:24:53.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245276 s, 16.7 MB/s 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:24:53.527 15:54:25 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:53.785 { 00:24:53.785 "nbd_device": "/dev/nbd0", 00:24:53.785 "bdev_name": "raid" 00:24:53.785 } 00:24:53.785 ]' 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:53.785 { 00:24:53.785 "nbd_device": "/dev/nbd0", 00:24:53.785 "bdev_name": "raid" 00:24:53.785 } 00:24:53.785 ]' 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:24:53.785 4096+0 records in
00:24:53.785 4096+0 records out
00:24:53.785 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.020759 s, 101 MB/s
00:24:53.785 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:24:54.044 4096+0 records in
00:24:54.044 4096+0 records out
00:24:54.044 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.216739 s, 9.7 MB/s
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:24:54.044 128+0 records in
00:24:54.044 128+0 records out
00:24:54.044 65536 bytes (66 kB, 64 KiB) copied, 0.000847672 s, 77.3 MB/s
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:24:54.044 2035+0 records in
00:24:54.044 2035+0 records out
00:24:54.044 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00614385 s, 170 MB/s
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:24:54.044 456+0 records in
00:24:54.044 456+0 records out
00:24:54.044 233472 bytes (233 kB, 228 KiB) copied, 0.00268036 s, 87.1 MB/s
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:54.044 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:54.302 [2024-11-05 15:54:26.580689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:24:54.302 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 58981
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 58981 ']'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 58981
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58981
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:24:54.560 killing process with pid 58981
15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58981'
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 58981
00:24:54.560 [2024-11-05 15:54:26.861968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:54.560 15:54:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 58981
00:24:54.560 [2024-11-05 15:54:26.862053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:54.560 [2024-11-05 15:54:26.862099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:54.560 [2024-11-05 15:54:26.862112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:24:54.817 [2024-11-05 15:54:26.987544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:55.381 15:54:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:24:55.381
00:24:55.381 real 0m2.987s
00:24:55.381 user 0m3.697s
00:24:55.381 sys 0m0.666s
00:24:55.381 15:54:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:55.381 15:54:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:24:55.381 ************************************
00:24:55.381 END TEST raid_function_test_raid0
00:24:55.381 ************************************
00:24:55.381 15:54:27 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:24:55.381 15:54:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:24:55.381 15:54:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:24:55.381 15:54:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:24:55.381 ************************************
00:24:55.381 START TEST raid_function_test_concat
00:24:55.381 ************************************
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59099
00:24:55.381 Process raid pid: 59099
15:54:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59099'
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59099
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 59099 ']'
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:55.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:54:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:24:55.381 15:54:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:24:55.381 [2024-11-05 15:54:27.647407] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization...
00:24:55.381 [2024-11-05 15:54:27.647516] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:55.639 [2024-11-05 15:54:27.798358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:55.639 [2024-11-05 15:54:27.893212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:55.639 [2024-11-05 15:54:28.028289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:55.639 [2024-11-05 15:54:28.028323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:24:56.212 Base_1
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:24:56.212 Base_2
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:24:56.212 [2024-11-05 15:54:28.564394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:24:56.212 [2024-11-05 15:54:28.566232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:24:56.212 [2024-11-05 15:54:28.566305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:24:56.212 [2024-11-05 15:54:28.566317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:24:56.212 [2024-11-05 15:54:28.566586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:24:56.212 [2024-11-05 15:54:28.566718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:24:56.212 [2024-11-05 15:54:28.566731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:24:56.212 [2024-11-05 15:54:28.566882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:56.212 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:24:56.488 [2024-11-05 15:54:28.784479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:24:56.488 /dev/nbd0
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:56.488 1+0 records in
00:24:56.488 1+0 records out
00:24:56.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181926 s, 22.5 MB/s
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:24:56.488 15:54:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:24:56.746 {
00:24:56.746 "nbd_device": "/dev/nbd0",
00:24:56.746 "bdev_name": "raid"
00:24:56.746 }
00:24:56.746 ]'
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:24:56.746 {
00:24:56.746 "nbd_device": "/dev/nbd0",
00:24:56.746 "bdev_name": "raid"
00:24:56.746 }
00:24:56.746 ]'
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:24:56.746 4096+0 records in
00:24:56.746 4096+0 records out
00:24:56.746 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0238742 s, 87.8 MB/s
00:24:56.746 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:24:57.004 4096+0 records in
00:24:57.004 4096+0 records out
00:24:57.004 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.211714 s, 9.9 MB/s
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:24:57.004 128+0 records in
00:24:57.004 128+0 records out
00:24:57.004 65536 bytes (66 kB, 64 KiB) copied, 0.000279275 s, 235 MB/s
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:24:57.004 2035+0 records in
00:24:57.004 2035+0 records out
00:24:57.004 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00813506 s, 128 MB/s
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:24:57.004 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:24:57.005 456+0 records in
00:24:57.005 456+0 records out
00:24:57.005 233472 bytes (233 kB, 228 KiB) copied, 0.00141823 s, 165 MB/s
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:57.005 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
[2024-11-05 15:54:29.589960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:24:57.262 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59099
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 59099 ']'
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 59099
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59099
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:24:57.519 killing process with pid 59099
15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59099'
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 59099
00:24:57.519 [2024-11-05 15:54:29.860060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:57.519 15:54:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 59099
00:24:57.519 [2024-11-05 15:54:29.860146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:57.519 [2024-11-05 15:54:29.860193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:57.519 [2024-11-05 15:54:29.860204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:24:57.777 [2024-11-05 15:54:29.988684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:58.373 15:54:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:24:58.373
00:24:58.373 real 0m3.107s
00:24:58.373 user 0m3.808s
00:24:58.373 sys 0m0.653s
00:24:58.373 15:54:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:58.374 15:54:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:24:58.374 ************************************
00:24:58.374 END TEST raid_function_test_concat
00:24:58.374 ************************************
00:24:58.374 15:54:30 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:24:58.374 15:54:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:24:58.374 15:54:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:24:58.374 15:54:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:24:58.374 ************************************
00:24:58.374 START TEST raid0_resize_test
00:24:58.374 ************************************
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59215
00:24:58.374 Process raid pid: 59215
15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59215'
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59215
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 59215 ']'
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:24:58.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:54:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:58.374 15:54:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.632 [2024-11-05 15:54:30.795603] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization...
00:24:58.632 [2024-11-05 15:54:30.795717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:58.632 [2024-11-05 15:54:30.951776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:58.889 [2024-11-05 15:54:31.051855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:58.889 [2024-11-05 15:54:31.189187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:58.889 [2024-11-05 15:54:31.189227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:59.453 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:59.453 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0
00:24:59.453 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:24:59.453 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:59.453 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.453 Base_1
00:24:59.453 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:59.454 15:54:31
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.454 Base_2 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.454 [2024-11-05 15:54:31.658461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:24:59.454 [2024-11-05 15:54:31.660251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:24:59.454 [2024-11-05 15:54:31.660305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:59.454 [2024-11-05 15:54:31.660316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:59.454 [2024-11-05 15:54:31.660550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:59.454 [2024-11-05 15:54:31.660651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:59.454 [2024-11-05 15:54:31.660659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:24:59.454 [2024-11-05 15:54:31.660778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.454 15:54:31 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.454 [2024-11-05 15:54:31.666442] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:59.454 [2024-11-05 15:54:31.666469] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:24:59.454 true 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:24:59.454 [2024-11-05 15:54:31.678622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.454 [2024-11-05 15:54:31.710434] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:59.454 [2024-11-05 15:54:31.710459] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:24:59.454 [2024-11-05 15:54:31.710482] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:24:59.454 true 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:24:59.454 [2024-11-05 15:54:31.718625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59215 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@952 -- # '[' -z 59215 ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 59215 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59215 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:59.454 killing process with pid 59215 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59215' 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 59215 00:24:59.454 [2024-11-05 15:54:31.767203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.454 15:54:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 59215 00:24:59.454 [2024-11-05 15:54:31.767266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.454 [2024-11-05 15:54:31.767307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.454 [2024-11-05 15:54:31.767316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:24:59.454 [2024-11-05 15:54:31.778380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.387 15:54:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:25:00.387 00:25:00.387 real 0m1.730s 00:25:00.387 user 0m1.875s 00:25:00.387 sys 0m0.242s 00:25:00.387 ************************************ 00:25:00.387 END TEST raid0_resize_test 00:25:00.387 
************************************ 00:25:00.387 15:54:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:00.387 15:54:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.387 15:54:32 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:25:00.387 15:54:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:00.387 15:54:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:00.387 15:54:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.387 ************************************ 00:25:00.387 START TEST raid1_resize_test 00:25:00.387 ************************************ 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59271 00:25:00.387 Process raid pid: 59271 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59271' 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59271 00:25:00.387 
15:54:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 59271 ']' 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:00.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:00.387 15:54:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.387 [2024-11-05 15:54:32.563714] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:00.387 [2024-11-05 15:54:32.563835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.387 [2024-11-05 15:54:32.723826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.644 [2024-11-05 15:54:32.823313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.644 [2024-11-05 15:54:32.960673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:00.644 [2024-11-05 15:54:32.960707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.210 Base_1 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.210 Base_2 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.210 [2024-11-05 15:54:33.419231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:25:01.210 [2024-11-05 15:54:33.421020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:25:01.210 [2024-11-05 15:54:33.421077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:01.210 [2024-11-05 15:54:33.421088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:01.210 [2024-11-05 15:54:33.421333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:01.210 [2024-11-05 15:54:33.421450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:01.210 [2024-11-05 15:54:33.421463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:25:01.210 [2024-11-05 15:54:33.421592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.210 [2024-11-05 15:54:33.427218] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:25:01.210 [2024-11-05 15:54:33.427246] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:25:01.210 true 00:25:01.210 
15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.210 [2024-11-05 15:54:33.439394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:25:01.210 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.211 [2024-11-05 15:54:33.463207] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:25:01.211 [2024-11-05 15:54:33.463229] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:25:01.211 [2024-11-05 15:54:33.463253] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:25:01.211 true 00:25:01.211 15:54:33 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.211 [2024-11-05 15:54:33.475401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59271 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 59271 ']' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 59271 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59271 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:01.211 killing process with pid 59271 00:25:01.211 15:54:33 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59271' 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 59271 00:25:01.211 15:54:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 59271 00:25:01.211 [2024-11-05 15:54:33.527373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:01.211 [2024-11-05 15:54:33.527440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.211 [2024-11-05 15:54:33.527876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.211 [2024-11-05 15:54:33.527897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:25:01.211 [2024-11-05 15:54:33.538411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:02.145 15:54:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:25:02.145 00:25:02.145 real 0m1.737s 00:25:02.145 user 0m1.883s 00:25:02.145 sys 0m0.247s 00:25:02.145 15:54:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:02.145 15:54:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.145 ************************************ 00:25:02.145 END TEST raid1_resize_test 00:25:02.145 ************************************ 00:25:02.145 15:54:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:25:02.145 15:54:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:02.145 15:54:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:25:02.145 15:54:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:02.145 15:54:34 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:25:02.145 15:54:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:02.145 ************************************ 00:25:02.145 START TEST raid_state_function_test 00:25:02.145 ************************************ 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59322 00:25:02.145 Process raid pid: 59322 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59322' 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59322 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 59322 ']' 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:02.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.145 15:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:02.145 [2024-11-05 15:54:34.345911] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:02.145 [2024-11-05 15:54:34.346017] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.145 [2024-11-05 15:54:34.504604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.402 [2024-11-05 15:54:34.605163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.402 [2024-11-05 15:54:34.742856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.402 [2024-11-05 15:54:34.742896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.968 [2024-11-05 15:54:35.192392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:02.968 [2024-11-05 15:54:35.192444] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:02.968 [2024-11-05 15:54:35.192454] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:02.968 [2024-11-05 15:54:35.192464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.968 
15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.968 "name": "Existed_Raid", 00:25:02.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.968 "strip_size_kb": 64, 00:25:02.968 "state": "configuring", 00:25:02.968 "raid_level": "raid0", 00:25:02.968 "superblock": false, 00:25:02.968 "num_base_bdevs": 2, 00:25:02.968 "num_base_bdevs_discovered": 0, 00:25:02.968 "num_base_bdevs_operational": 2, 00:25:02.968 "base_bdevs_list": [ 00:25:02.968 { 00:25:02.968 "name": "BaseBdev1", 00:25:02.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.968 "is_configured": false, 00:25:02.968 "data_offset": 0, 00:25:02.968 "data_size": 0 00:25:02.968 }, 00:25:02.968 { 00:25:02.968 "name": "BaseBdev2", 00:25:02.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.968 "is_configured": false, 00:25:02.968 "data_offset": 0, 00:25:02.968 "data_size": 0 00:25:02.968 } 00:25:02.968 ] 00:25:02.968 }' 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.968 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 [2024-11-05 15:54:35.520470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.226 [2024-11-05 15:54:35.520506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:03.226 
15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 [2024-11-05 15:54:35.528433] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.226 [2024-11-05 15:54:35.528472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.226 [2024-11-05 15:54:35.528481] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.226 [2024-11-05 15:54:35.528492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 [2024-11-05 15:54:35.564178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.226 BaseBdev1 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:03.226 15:54:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.226 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.227 [ 00:25:03.227 { 00:25:03.227 "name": "BaseBdev1", 00:25:03.227 "aliases": [ 00:25:03.227 "90347e61-8a2d-4f49-998b-3ac0a816d739" 00:25:03.227 ], 00:25:03.227 "product_name": "Malloc disk", 00:25:03.227 "block_size": 512, 00:25:03.227 "num_blocks": 65536, 00:25:03.227 "uuid": "90347e61-8a2d-4f49-998b-3ac0a816d739", 00:25:03.227 "assigned_rate_limits": { 00:25:03.227 "rw_ios_per_sec": 0, 00:25:03.227 "rw_mbytes_per_sec": 0, 00:25:03.227 "r_mbytes_per_sec": 0, 00:25:03.227 "w_mbytes_per_sec": 0 00:25:03.227 }, 00:25:03.227 "claimed": true, 00:25:03.227 "claim_type": "exclusive_write", 00:25:03.227 "zoned": false, 00:25:03.227 "supported_io_types": { 00:25:03.227 "read": true, 00:25:03.227 "write": true, 00:25:03.227 "unmap": true, 00:25:03.227 "flush": true, 
00:25:03.227 "reset": true, 00:25:03.227 "nvme_admin": false, 00:25:03.227 "nvme_io": false, 00:25:03.227 "nvme_io_md": false, 00:25:03.227 "write_zeroes": true, 00:25:03.227 "zcopy": true, 00:25:03.227 "get_zone_info": false, 00:25:03.227 "zone_management": false, 00:25:03.227 "zone_append": false, 00:25:03.227 "compare": false, 00:25:03.227 "compare_and_write": false, 00:25:03.227 "abort": true, 00:25:03.227 "seek_hole": false, 00:25:03.227 "seek_data": false, 00:25:03.227 "copy": true, 00:25:03.227 "nvme_iov_md": false 00:25:03.227 }, 00:25:03.227 "memory_domains": [ 00:25:03.227 { 00:25:03.227 "dma_device_id": "system", 00:25:03.227 "dma_device_type": 1 00:25:03.227 }, 00:25:03.227 { 00:25:03.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.227 "dma_device_type": 2 00:25:03.227 } 00:25:03.227 ], 00:25:03.227 "driver_specific": {} 00:25:03.227 } 00:25:03.227 ] 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.227 "name": "Existed_Raid", 00:25:03.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.227 "strip_size_kb": 64, 00:25:03.227 "state": "configuring", 00:25:03.227 "raid_level": "raid0", 00:25:03.227 "superblock": false, 00:25:03.227 "num_base_bdevs": 2, 00:25:03.227 "num_base_bdevs_discovered": 1, 00:25:03.227 "num_base_bdevs_operational": 2, 00:25:03.227 "base_bdevs_list": [ 00:25:03.227 { 00:25:03.227 "name": "BaseBdev1", 00:25:03.227 "uuid": "90347e61-8a2d-4f49-998b-3ac0a816d739", 00:25:03.227 "is_configured": true, 00:25:03.227 "data_offset": 0, 00:25:03.227 "data_size": 65536 00:25:03.227 }, 00:25:03.227 { 00:25:03.227 "name": "BaseBdev2", 00:25:03.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.227 "is_configured": false, 00:25:03.227 "data_offset": 0, 00:25:03.227 "data_size": 0 00:25:03.227 } 00:25:03.227 ] 00:25:03.227 }' 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.227 15:54:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.486 [2024-11-05 15:54:35.884283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.486 [2024-11-05 15:54:35.884334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.486 [2024-11-05 15:54:35.892323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.486 [2024-11-05 15:54:35.894167] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.486 [2024-11-05 15:54:35.894206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.486 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.781 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.781 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.781 "name": "Existed_Raid", 00:25:03.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.781 "strip_size_kb": 64, 00:25:03.781 "state": "configuring", 00:25:03.781 "raid_level": "raid0", 00:25:03.781 "superblock": false, 00:25:03.781 "num_base_bdevs": 2, 00:25:03.781 
"num_base_bdevs_discovered": 1, 00:25:03.781 "num_base_bdevs_operational": 2, 00:25:03.781 "base_bdevs_list": [ 00:25:03.781 { 00:25:03.781 "name": "BaseBdev1", 00:25:03.781 "uuid": "90347e61-8a2d-4f49-998b-3ac0a816d739", 00:25:03.781 "is_configured": true, 00:25:03.781 "data_offset": 0, 00:25:03.781 "data_size": 65536 00:25:03.781 }, 00:25:03.781 { 00:25:03.781 "name": "BaseBdev2", 00:25:03.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.781 "is_configured": false, 00:25:03.781 "data_offset": 0, 00:25:03.781 "data_size": 0 00:25:03.781 } 00:25:03.781 ] 00:25:03.781 }' 00:25:03.781 15:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.781 15:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.781 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:03.781 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.781 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.064 [2024-11-05 15:54:36.214781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:04.064 [2024-11-05 15:54:36.214823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:04.064 [2024-11-05 15:54:36.214831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:04.064 [2024-11-05 15:54:36.215111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:04.064 [2024-11-05 15:54:36.215253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:04.064 [2024-11-05 15:54:36.215266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:04.064 [2024-11-05 15:54:36.215479] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.064 BaseBdev2 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.064 [ 00:25:04.064 { 00:25:04.064 "name": "BaseBdev2", 00:25:04.064 "aliases": [ 00:25:04.064 "e8b2d24d-df37-45fa-97e9-b22e96f64a75" 00:25:04.064 ], 00:25:04.064 "product_name": "Malloc disk", 00:25:04.064 "block_size": 512, 00:25:04.064 "num_blocks": 65536, 00:25:04.064 "uuid": "e8b2d24d-df37-45fa-97e9-b22e96f64a75", 00:25:04.064 
"assigned_rate_limits": { 00:25:04.064 "rw_ios_per_sec": 0, 00:25:04.064 "rw_mbytes_per_sec": 0, 00:25:04.064 "r_mbytes_per_sec": 0, 00:25:04.064 "w_mbytes_per_sec": 0 00:25:04.064 }, 00:25:04.064 "claimed": true, 00:25:04.064 "claim_type": "exclusive_write", 00:25:04.064 "zoned": false, 00:25:04.064 "supported_io_types": { 00:25:04.064 "read": true, 00:25:04.064 "write": true, 00:25:04.064 "unmap": true, 00:25:04.064 "flush": true, 00:25:04.064 "reset": true, 00:25:04.064 "nvme_admin": false, 00:25:04.064 "nvme_io": false, 00:25:04.064 "nvme_io_md": false, 00:25:04.064 "write_zeroes": true, 00:25:04.064 "zcopy": true, 00:25:04.064 "get_zone_info": false, 00:25:04.064 "zone_management": false, 00:25:04.064 "zone_append": false, 00:25:04.064 "compare": false, 00:25:04.064 "compare_and_write": false, 00:25:04.064 "abort": true, 00:25:04.064 "seek_hole": false, 00:25:04.064 "seek_data": false, 00:25:04.064 "copy": true, 00:25:04.064 "nvme_iov_md": false 00:25:04.064 }, 00:25:04.064 "memory_domains": [ 00:25:04.064 { 00:25:04.064 "dma_device_id": "system", 00:25:04.064 "dma_device_type": 1 00:25:04.064 }, 00:25:04.064 { 00:25:04.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.064 "dma_device_type": 2 00:25:04.064 } 00:25:04.064 ], 00:25:04.064 "driver_specific": {} 00:25:04.064 } 00:25:04.064 ] 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:04.064 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.065 "name": "Existed_Raid", 00:25:04.065 "uuid": "bc53a2e5-284c-4625-8242-fe2d2951ab93", 00:25:04.065 "strip_size_kb": 64, 00:25:04.065 "state": "online", 00:25:04.065 "raid_level": "raid0", 00:25:04.065 "superblock": false, 00:25:04.065 "num_base_bdevs": 2, 00:25:04.065 "num_base_bdevs_discovered": 2, 00:25:04.065 "num_base_bdevs_operational": 2, 00:25:04.065 "base_bdevs_list": [ 00:25:04.065 { 
00:25:04.065 "name": "BaseBdev1", 00:25:04.065 "uuid": "90347e61-8a2d-4f49-998b-3ac0a816d739", 00:25:04.065 "is_configured": true, 00:25:04.065 "data_offset": 0, 00:25:04.065 "data_size": 65536 00:25:04.065 }, 00:25:04.065 { 00:25:04.065 "name": "BaseBdev2", 00:25:04.065 "uuid": "e8b2d24d-df37-45fa-97e9-b22e96f64a75", 00:25:04.065 "is_configured": true, 00:25:04.065 "data_offset": 0, 00:25:04.065 "data_size": 65536 00:25:04.065 } 00:25:04.065 ] 00:25:04.065 }' 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.065 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.323 [2024-11-05 15:54:36.551125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:25:04.323 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.323 "name": "Existed_Raid", 00:25:04.323 "aliases": [ 00:25:04.323 "bc53a2e5-284c-4625-8242-fe2d2951ab93" 00:25:04.323 ], 00:25:04.323 "product_name": "Raid Volume", 00:25:04.323 "block_size": 512, 00:25:04.323 "num_blocks": 131072, 00:25:04.323 "uuid": "bc53a2e5-284c-4625-8242-fe2d2951ab93", 00:25:04.323 "assigned_rate_limits": { 00:25:04.323 "rw_ios_per_sec": 0, 00:25:04.323 "rw_mbytes_per_sec": 0, 00:25:04.323 "r_mbytes_per_sec": 0, 00:25:04.323 "w_mbytes_per_sec": 0 00:25:04.323 }, 00:25:04.323 "claimed": false, 00:25:04.323 "zoned": false, 00:25:04.323 "supported_io_types": { 00:25:04.323 "read": true, 00:25:04.323 "write": true, 00:25:04.323 "unmap": true, 00:25:04.323 "flush": true, 00:25:04.323 "reset": true, 00:25:04.323 "nvme_admin": false, 00:25:04.323 "nvme_io": false, 00:25:04.323 "nvme_io_md": false, 00:25:04.323 "write_zeroes": true, 00:25:04.323 "zcopy": false, 00:25:04.323 "get_zone_info": false, 00:25:04.323 "zone_management": false, 00:25:04.323 "zone_append": false, 00:25:04.323 "compare": false, 00:25:04.323 "compare_and_write": false, 00:25:04.323 "abort": false, 00:25:04.323 "seek_hole": false, 00:25:04.323 "seek_data": false, 00:25:04.323 "copy": false, 00:25:04.323 "nvme_iov_md": false 00:25:04.323 }, 00:25:04.323 "memory_domains": [ 00:25:04.323 { 00:25:04.323 "dma_device_id": "system", 00:25:04.323 "dma_device_type": 1 00:25:04.323 }, 00:25:04.323 { 00:25:04.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.323 "dma_device_type": 2 00:25:04.323 }, 00:25:04.323 { 00:25:04.323 "dma_device_id": "system", 00:25:04.323 "dma_device_type": 1 00:25:04.323 }, 00:25:04.323 { 00:25:04.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.323 "dma_device_type": 2 00:25:04.323 } 00:25:04.323 ], 00:25:04.323 "driver_specific": { 00:25:04.323 "raid": { 00:25:04.323 "uuid": "bc53a2e5-284c-4625-8242-fe2d2951ab93", 
00:25:04.323 "strip_size_kb": 64, 00:25:04.323 "state": "online", 00:25:04.323 "raid_level": "raid0", 00:25:04.323 "superblock": false, 00:25:04.323 "num_base_bdevs": 2, 00:25:04.323 "num_base_bdevs_discovered": 2, 00:25:04.323 "num_base_bdevs_operational": 2, 00:25:04.323 "base_bdevs_list": [ 00:25:04.323 { 00:25:04.323 "name": "BaseBdev1", 00:25:04.323 "uuid": "90347e61-8a2d-4f49-998b-3ac0a816d739", 00:25:04.323 "is_configured": true, 00:25:04.323 "data_offset": 0, 00:25:04.323 "data_size": 65536 00:25:04.323 }, 00:25:04.323 { 00:25:04.323 "name": "BaseBdev2", 00:25:04.323 "uuid": "e8b2d24d-df37-45fa-97e9-b22e96f64a75", 00:25:04.324 "is_configured": true, 00:25:04.324 "data_offset": 0, 00:25:04.324 "data_size": 65536 00:25:04.324 } 00:25:04.324 ] 00:25:04.324 } 00:25:04.324 } 00:25:04.324 }' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:04.324 BaseBdev2' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.324 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.324 [2024-11-05 15:54:36.706947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:04.324 [2024-11-05 15:54:36.706976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:04.324 [2024-11-05 15:54:36.707017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.582 15:54:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.582 "name": "Existed_Raid", 00:25:04.582 "uuid": "bc53a2e5-284c-4625-8242-fe2d2951ab93", 00:25:04.582 "strip_size_kb": 64, 00:25:04.582 "state": "offline", 00:25:04.582 "raid_level": "raid0", 00:25:04.582 "superblock": false, 00:25:04.582 "num_base_bdevs": 2, 00:25:04.582 "num_base_bdevs_discovered": 1, 00:25:04.582 "num_base_bdevs_operational": 1, 00:25:04.582 "base_bdevs_list": [ 00:25:04.582 { 00:25:04.582 "name": null, 00:25:04.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.582 "is_configured": false, 00:25:04.582 "data_offset": 0, 00:25:04.582 "data_size": 65536 00:25:04.582 }, 00:25:04.582 { 00:25:04.582 "name": "BaseBdev2", 00:25:04.582 "uuid": "e8b2d24d-df37-45fa-97e9-b22e96f64a75", 00:25:04.582 "is_configured": true, 00:25:04.582 "data_offset": 0, 00:25:04.582 "data_size": 65536 00:25:04.582 } 00:25:04.582 ] 00:25:04.582 }' 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.582 15:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:04.841 15:54:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.841 [2024-11-05 15:54:37.101563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:04.841 [2024-11-05 15:54:37.101608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59322 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 59322 ']' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 59322 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59322 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:04.841 killing process with pid 59322 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59322' 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 59322 00:25:04.841 [2024-11-05 15:54:37.211621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:04.841 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 59322 00:25:04.841 [2024-11-05 15:54:37.220245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:05.407 15:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:05.407 00:25:05.407 real 0m3.509s 00:25:05.407 user 0m5.122s 00:25:05.407 sys 
0m0.567s 00:25:05.407 ************************************ 00:25:05.407 END TEST raid_state_function_test 00:25:05.407 ************************************ 00:25:05.407 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:05.407 15:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.407 15:54:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:25:05.407 15:54:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:05.407 15:54:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:05.407 15:54:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:05.665 ************************************ 00:25:05.665 START TEST raid_state_function_test_sb 00:25:05.665 ************************************ 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59559 00:25:05.665 Process raid pid: 59559 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59559' 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59559 00:25:05.665 
15:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 59559 ']' 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:05.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.665 15:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:05.665 [2024-11-05 15:54:37.895150] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:05.665 [2024-11-05 15:54:37.895267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.665 [2024-11-05 15:54:38.068267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.922 [2024-11-05 15:54:38.171591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.922 [2024-11-05 15:54:38.309743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:05.922 [2024-11-05 15:54:38.309786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.487 [2024-11-05 15:54:38.707749] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:06.487 [2024-11-05 15:54:38.707802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:06.487 [2024-11-05 15:54:38.707813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.487 [2024-11-05 15:54:38.707823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.487 
15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.487 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.488 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.488 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.488 "name": "Existed_Raid", 00:25:06.488 "uuid": "9a452cb8-34eb-43af-940f-f71355af0cec", 00:25:06.488 "strip_size_kb": 
64, 00:25:06.488 "state": "configuring", 00:25:06.488 "raid_level": "raid0", 00:25:06.488 "superblock": true, 00:25:06.488 "num_base_bdevs": 2, 00:25:06.488 "num_base_bdevs_discovered": 0, 00:25:06.488 "num_base_bdevs_operational": 2, 00:25:06.488 "base_bdevs_list": [ 00:25:06.488 { 00:25:06.488 "name": "BaseBdev1", 00:25:06.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.488 "is_configured": false, 00:25:06.488 "data_offset": 0, 00:25:06.488 "data_size": 0 00:25:06.488 }, 00:25:06.488 { 00:25:06.488 "name": "BaseBdev2", 00:25:06.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.488 "is_configured": false, 00:25:06.488 "data_offset": 0, 00:25:06.488 "data_size": 0 00:25:06.488 } 00:25:06.488 ] 00:25:06.488 }' 00:25:06.488 15:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.488 15:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.746 [2024-11-05 15:54:39.035767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:06.746 [2024-11-05 15:54:39.035803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.746 15:54:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.746 [2024-11-05 15:54:39.043768] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:06.746 [2024-11-05 15:54:39.043808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:06.746 [2024-11-05 15:54:39.043817] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.746 [2024-11-05 15:54:39.043828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.746 [2024-11-05 15:54:39.076136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.746 BaseBdev1 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.746 [ 00:25:06.746 { 00:25:06.746 "name": "BaseBdev1", 00:25:06.746 "aliases": [ 00:25:06.746 "3bf4e619-633c-46e5-9372-eaa9fa6dd58d" 00:25:06.746 ], 00:25:06.746 "product_name": "Malloc disk", 00:25:06.746 "block_size": 512, 00:25:06.746 "num_blocks": 65536, 00:25:06.746 "uuid": "3bf4e619-633c-46e5-9372-eaa9fa6dd58d", 00:25:06.746 "assigned_rate_limits": { 00:25:06.746 "rw_ios_per_sec": 0, 00:25:06.746 "rw_mbytes_per_sec": 0, 00:25:06.746 "r_mbytes_per_sec": 0, 00:25:06.746 "w_mbytes_per_sec": 0 00:25:06.746 }, 00:25:06.746 "claimed": true, 00:25:06.746 "claim_type": "exclusive_write", 00:25:06.746 "zoned": false, 00:25:06.746 "supported_io_types": { 00:25:06.746 "read": true, 00:25:06.746 "write": true, 00:25:06.746 "unmap": true, 00:25:06.746 "flush": true, 00:25:06.746 "reset": true, 00:25:06.746 "nvme_admin": false, 00:25:06.746 "nvme_io": false, 00:25:06.746 "nvme_io_md": false, 00:25:06.746 "write_zeroes": true, 00:25:06.746 "zcopy": true, 00:25:06.746 "get_zone_info": false, 00:25:06.746 "zone_management": false, 00:25:06.746 "zone_append": false, 00:25:06.746 "compare": false, 00:25:06.746 "compare_and_write": false, 00:25:06.746 
"abort": true, 00:25:06.746 "seek_hole": false, 00:25:06.746 "seek_data": false, 00:25:06.746 "copy": true, 00:25:06.746 "nvme_iov_md": false 00:25:06.746 }, 00:25:06.746 "memory_domains": [ 00:25:06.746 { 00:25:06.746 "dma_device_id": "system", 00:25:06.746 "dma_device_type": 1 00:25:06.746 }, 00:25:06.746 { 00:25:06.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.746 "dma_device_type": 2 00:25:06.746 } 00:25:06.746 ], 00:25:06.746 "driver_specific": {} 00:25:06.746 } 00:25:06.746 ] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.746 "name": "Existed_Raid", 00:25:06.746 "uuid": "5403e2b8-a5e3-4bc6-84a3-d64172d5c9a8", 00:25:06.746 "strip_size_kb": 64, 00:25:06.746 "state": "configuring", 00:25:06.746 "raid_level": "raid0", 00:25:06.746 "superblock": true, 00:25:06.746 "num_base_bdevs": 2, 00:25:06.746 "num_base_bdevs_discovered": 1, 00:25:06.746 "num_base_bdevs_operational": 2, 00:25:06.746 "base_bdevs_list": [ 00:25:06.746 { 00:25:06.746 "name": "BaseBdev1", 00:25:06.746 "uuid": "3bf4e619-633c-46e5-9372-eaa9fa6dd58d", 00:25:06.746 "is_configured": true, 00:25:06.746 "data_offset": 2048, 00:25:06.746 "data_size": 63488 00:25:06.746 }, 00:25:06.746 { 00:25:06.746 "name": "BaseBdev2", 00:25:06.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.746 "is_configured": false, 00:25:06.746 "data_offset": 0, 00:25:06.746 "data_size": 0 00:25:06.746 } 00:25:06.746 ] 00:25:06.746 }' 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.746 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.312 [2024-11-05 15:54:39.432253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:07.312 [2024-11-05 15:54:39.432297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.312 [2024-11-05 15:54:39.440306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.312 [2024-11-05 15:54:39.442140] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.312 [2024-11-05 15:54:39.442178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.312 "name": "Existed_Raid", 00:25:07.312 "uuid": "ab18f87c-a4ae-4390-a78a-d335244be6f0", 00:25:07.312 "strip_size_kb": 64, 00:25:07.312 "state": "configuring", 00:25:07.312 "raid_level": "raid0", 00:25:07.312 "superblock": true, 00:25:07.312 "num_base_bdevs": 2, 00:25:07.312 "num_base_bdevs_discovered": 1, 00:25:07.312 "num_base_bdevs_operational": 2, 00:25:07.312 "base_bdevs_list": [ 00:25:07.312 { 00:25:07.312 "name": "BaseBdev1", 00:25:07.312 "uuid": "3bf4e619-633c-46e5-9372-eaa9fa6dd58d", 00:25:07.312 "is_configured": true, 00:25:07.312 "data_offset": 2048, 
00:25:07.312 "data_size": 63488 00:25:07.312 }, 00:25:07.312 { 00:25:07.312 "name": "BaseBdev2", 00:25:07.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.312 "is_configured": false, 00:25:07.312 "data_offset": 0, 00:25:07.312 "data_size": 0 00:25:07.312 } 00:25:07.312 ] 00:25:07.312 }' 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.312 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.570 [2024-11-05 15:54:39.766885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.570 [2024-11-05 15:54:39.767073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:07.570 [2024-11-05 15:54:39.767085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:07.570 [2024-11-05 15:54:39.767337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:07.570 [2024-11-05 15:54:39.767460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:07.570 [2024-11-05 15:54:39.767471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:07.570 [2024-11-05 15:54:39.767590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.570 BaseBdev2 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.570 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.570 [ 00:25:07.570 { 00:25:07.570 "name": "BaseBdev2", 00:25:07.570 "aliases": [ 00:25:07.570 "e5762484-4ea5-4b6c-aa0f-25447f5bc91c" 00:25:07.570 ], 00:25:07.570 "product_name": "Malloc disk", 00:25:07.570 "block_size": 512, 00:25:07.570 "num_blocks": 65536, 00:25:07.570 "uuid": "e5762484-4ea5-4b6c-aa0f-25447f5bc91c", 00:25:07.570 "assigned_rate_limits": { 00:25:07.570 "rw_ios_per_sec": 0, 00:25:07.570 "rw_mbytes_per_sec": 0, 00:25:07.570 "r_mbytes_per_sec": 0, 00:25:07.570 "w_mbytes_per_sec": 0 00:25:07.570 }, 00:25:07.570 "claimed": true, 00:25:07.570 "claim_type": 
"exclusive_write", 00:25:07.570 "zoned": false, 00:25:07.570 "supported_io_types": { 00:25:07.570 "read": true, 00:25:07.570 "write": true, 00:25:07.570 "unmap": true, 00:25:07.570 "flush": true, 00:25:07.570 "reset": true, 00:25:07.570 "nvme_admin": false, 00:25:07.570 "nvme_io": false, 00:25:07.570 "nvme_io_md": false, 00:25:07.570 "write_zeroes": true, 00:25:07.570 "zcopy": true, 00:25:07.570 "get_zone_info": false, 00:25:07.570 "zone_management": false, 00:25:07.570 "zone_append": false, 00:25:07.570 "compare": false, 00:25:07.570 "compare_and_write": false, 00:25:07.570 "abort": true, 00:25:07.570 "seek_hole": false, 00:25:07.570 "seek_data": false, 00:25:07.570 "copy": true, 00:25:07.570 "nvme_iov_md": false 00:25:07.570 }, 00:25:07.570 "memory_domains": [ 00:25:07.571 { 00:25:07.571 "dma_device_id": "system", 00:25:07.571 "dma_device_type": 1 00:25:07.571 }, 00:25:07.571 { 00:25:07.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.571 "dma_device_type": 2 00:25:07.571 } 00:25:07.571 ], 00:25:07.571 "driver_specific": {} 00:25:07.571 } 00:25:07.571 ] 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.571 "name": "Existed_Raid", 00:25:07.571 "uuid": "ab18f87c-a4ae-4390-a78a-d335244be6f0", 00:25:07.571 "strip_size_kb": 64, 00:25:07.571 "state": "online", 00:25:07.571 "raid_level": "raid0", 00:25:07.571 "superblock": true, 00:25:07.571 "num_base_bdevs": 2, 00:25:07.571 "num_base_bdevs_discovered": 2, 00:25:07.571 "num_base_bdevs_operational": 2, 00:25:07.571 "base_bdevs_list": [ 00:25:07.571 { 00:25:07.571 "name": "BaseBdev1", 00:25:07.571 "uuid": "3bf4e619-633c-46e5-9372-eaa9fa6dd58d", 00:25:07.571 "is_configured": true, 00:25:07.571 "data_offset": 2048, 00:25:07.571 "data_size": 63488 
00:25:07.571 }, 00:25:07.571 { 00:25:07.571 "name": "BaseBdev2", 00:25:07.571 "uuid": "e5762484-4ea5-4b6c-aa0f-25447f5bc91c", 00:25:07.571 "is_configured": true, 00:25:07.571 "data_offset": 2048, 00:25:07.571 "data_size": 63488 00:25:07.571 } 00:25:07.571 ] 00:25:07.571 }' 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.571 15:54:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.828 [2024-11-05 15:54:40.115299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.828 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:07.828 "name": 
"Existed_Raid", 00:25:07.828 "aliases": [ 00:25:07.828 "ab18f87c-a4ae-4390-a78a-d335244be6f0" 00:25:07.828 ], 00:25:07.828 "product_name": "Raid Volume", 00:25:07.828 "block_size": 512, 00:25:07.828 "num_blocks": 126976, 00:25:07.828 "uuid": "ab18f87c-a4ae-4390-a78a-d335244be6f0", 00:25:07.828 "assigned_rate_limits": { 00:25:07.828 "rw_ios_per_sec": 0, 00:25:07.828 "rw_mbytes_per_sec": 0, 00:25:07.828 "r_mbytes_per_sec": 0, 00:25:07.828 "w_mbytes_per_sec": 0 00:25:07.828 }, 00:25:07.828 "claimed": false, 00:25:07.829 "zoned": false, 00:25:07.829 "supported_io_types": { 00:25:07.829 "read": true, 00:25:07.829 "write": true, 00:25:07.829 "unmap": true, 00:25:07.829 "flush": true, 00:25:07.829 "reset": true, 00:25:07.829 "nvme_admin": false, 00:25:07.829 "nvme_io": false, 00:25:07.829 "nvme_io_md": false, 00:25:07.829 "write_zeroes": true, 00:25:07.829 "zcopy": false, 00:25:07.829 "get_zone_info": false, 00:25:07.829 "zone_management": false, 00:25:07.829 "zone_append": false, 00:25:07.829 "compare": false, 00:25:07.829 "compare_and_write": false, 00:25:07.829 "abort": false, 00:25:07.829 "seek_hole": false, 00:25:07.829 "seek_data": false, 00:25:07.829 "copy": false, 00:25:07.829 "nvme_iov_md": false 00:25:07.829 }, 00:25:07.829 "memory_domains": [ 00:25:07.829 { 00:25:07.829 "dma_device_id": "system", 00:25:07.829 "dma_device_type": 1 00:25:07.829 }, 00:25:07.829 { 00:25:07.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.829 "dma_device_type": 2 00:25:07.829 }, 00:25:07.829 { 00:25:07.829 "dma_device_id": "system", 00:25:07.829 "dma_device_type": 1 00:25:07.829 }, 00:25:07.829 { 00:25:07.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.829 "dma_device_type": 2 00:25:07.829 } 00:25:07.829 ], 00:25:07.829 "driver_specific": { 00:25:07.829 "raid": { 00:25:07.829 "uuid": "ab18f87c-a4ae-4390-a78a-d335244be6f0", 00:25:07.829 "strip_size_kb": 64, 00:25:07.829 "state": "online", 00:25:07.829 "raid_level": "raid0", 00:25:07.829 "superblock": true, 00:25:07.829 
"num_base_bdevs": 2, 00:25:07.829 "num_base_bdevs_discovered": 2, 00:25:07.829 "num_base_bdevs_operational": 2, 00:25:07.829 "base_bdevs_list": [ 00:25:07.829 { 00:25:07.829 "name": "BaseBdev1", 00:25:07.829 "uuid": "3bf4e619-633c-46e5-9372-eaa9fa6dd58d", 00:25:07.829 "is_configured": true, 00:25:07.829 "data_offset": 2048, 00:25:07.829 "data_size": 63488 00:25:07.829 }, 00:25:07.829 { 00:25:07.829 "name": "BaseBdev2", 00:25:07.829 "uuid": "e5762484-4ea5-4b6c-aa0f-25447f5bc91c", 00:25:07.829 "is_configured": true, 00:25:07.829 "data_offset": 2048, 00:25:07.829 "data_size": 63488 00:25:07.829 } 00:25:07.829 ] 00:25:07.829 } 00:25:07.829 } 00:25:07.829 }' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:07.829 BaseBdev2' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.829 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.087 [2024-11-05 15:54:40.271070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:08.087 [2024-11-05 15:54:40.271102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.087 [2024-11-05 15:54:40.271148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.087 15:54:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.087 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.088 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.088 "name": "Existed_Raid", 00:25:08.088 "uuid": "ab18f87c-a4ae-4390-a78a-d335244be6f0", 00:25:08.088 "strip_size_kb": 64, 00:25:08.088 "state": "offline", 00:25:08.088 "raid_level": "raid0", 00:25:08.088 "superblock": true, 00:25:08.088 "num_base_bdevs": 2, 00:25:08.088 "num_base_bdevs_discovered": 1, 00:25:08.088 "num_base_bdevs_operational": 1, 00:25:08.088 "base_bdevs_list": [ 00:25:08.088 { 00:25:08.088 "name": null, 00:25:08.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.088 "is_configured": false, 00:25:08.088 "data_offset": 0, 00:25:08.088 "data_size": 63488 00:25:08.088 }, 00:25:08.088 { 00:25:08.088 "name": "BaseBdev2", 00:25:08.088 "uuid": "e5762484-4ea5-4b6c-aa0f-25447f5bc91c", 00:25:08.088 "is_configured": true, 00:25:08.088 "data_offset": 2048, 00:25:08.088 "data_size": 63488 00:25:08.088 } 00:25:08.088 ] 00:25:08.088 }' 00:25:08.088 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.088 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.346 15:54:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.346 [2024-11-05 15:54:40.677662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:08.346 [2024-11-05 15:54:40.677712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.346 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:08.347 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.347 15:54:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59559 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 59559 ']' 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 59559 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59559 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:08.605 killing process with pid 59559 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59559' 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 59559 00:25:08.605 [2024-11-05 15:54:40.800338] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:08.605 15:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 59559 00:25:08.605 [2024-11-05 15:54:40.808838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:09.169 15:54:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:25:09.169 00:25:09.169 real 0m3.533s 00:25:09.169 user 0m5.144s 00:25:09.169 sys 0m0.576s 00:25:09.169 15:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:09.169 ************************************ 00:25:09.169 END TEST raid_state_function_test_sb 00:25:09.169 ************************************ 00:25:09.169 15:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 15:54:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:25:09.169 15:54:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:09.169 15:54:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:09.169 15:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 ************************************ 00:25:09.169 START TEST raid_superblock_test 00:25:09.169 ************************************ 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=59794 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 59794 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59794 ']' 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:09.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:09.170 15:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.170 [2024-11-05 15:54:41.454661] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:09.170 [2024-11-05 15:54:41.454752] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59794 ] 00:25:09.426 [2024-11-05 15:54:41.604928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.426 [2024-11-05 15:54:41.688264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.426 [2024-11-05 15:54:41.797982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:09.426 [2024-11-05 15:54:41.798016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:09.991 
15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.991 malloc1 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.991 [2024-11-05 15:54:42.336955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:09.991 [2024-11-05 15:54:42.337009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.991 [2024-11-05 15:54:42.337027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:09.991 [2024-11-05 15:54:42.337035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.991 [2024-11-05 15:54:42.338782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.991 [2024-11-05 15:54:42.338817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:09.991 pt1 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.991 malloc2 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.991 [2024-11-05 15:54:42.368698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:09.991 [2024-11-05 15:54:42.368743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.991 [2024-11-05 
15:54:42.368760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:09.991 [2024-11-05 15:54:42.368768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.991 [2024-11-05 15:54:42.370489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.991 [2024-11-05 15:54:42.370521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:09.991 pt2 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.991 [2024-11-05 15:54:42.376743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:09.991 [2024-11-05 15:54:42.378252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:09.991 [2024-11-05 15:54:42.378375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:09.991 [2024-11-05 15:54:42.378384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:09.991 [2024-11-05 15:54:42.378579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:09.991 [2024-11-05 15:54:42.378689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:09.991 [2024-11-05 15:54:42.378700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:09.991 [2024-11-05 15:54:42.378804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.991 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.992 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.992 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.992 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.992 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.249 15:54:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.249 "name": "raid_bdev1", 00:25:10.249 "uuid": "cfd96057-5eaf-4cb8-9d8c-b084afc65394", 00:25:10.249 "strip_size_kb": 64, 00:25:10.249 "state": "online", 00:25:10.249 "raid_level": "raid0", 00:25:10.249 "superblock": true, 00:25:10.249 "num_base_bdevs": 2, 00:25:10.249 "num_base_bdevs_discovered": 2, 00:25:10.249 "num_base_bdevs_operational": 2, 00:25:10.249 "base_bdevs_list": [ 00:25:10.249 { 00:25:10.249 "name": "pt1", 00:25:10.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:10.249 "is_configured": true, 00:25:10.249 "data_offset": 2048, 00:25:10.249 "data_size": 63488 00:25:10.249 }, 00:25:10.249 { 00:25:10.249 "name": "pt2", 00:25:10.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:10.249 "is_configured": true, 00:25:10.249 "data_offset": 2048, 00:25:10.249 "data_size": 63488 00:25:10.249 } 00:25:10.249 ] 00:25:10.249 }' 00:25:10.249 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.249 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.507 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:10.507 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:10.507 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:10.507 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:10.507 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:10.507 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:10.507 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:10.508 15:54:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.508 [2024-11-05 15:54:42.685033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:10.508 "name": "raid_bdev1", 00:25:10.508 "aliases": [ 00:25:10.508 "cfd96057-5eaf-4cb8-9d8c-b084afc65394" 00:25:10.508 ], 00:25:10.508 "product_name": "Raid Volume", 00:25:10.508 "block_size": 512, 00:25:10.508 "num_blocks": 126976, 00:25:10.508 "uuid": "cfd96057-5eaf-4cb8-9d8c-b084afc65394", 00:25:10.508 "assigned_rate_limits": { 00:25:10.508 "rw_ios_per_sec": 0, 00:25:10.508 "rw_mbytes_per_sec": 0, 00:25:10.508 "r_mbytes_per_sec": 0, 00:25:10.508 "w_mbytes_per_sec": 0 00:25:10.508 }, 00:25:10.508 "claimed": false, 00:25:10.508 "zoned": false, 00:25:10.508 "supported_io_types": { 00:25:10.508 "read": true, 00:25:10.508 "write": true, 00:25:10.508 "unmap": true, 00:25:10.508 "flush": true, 00:25:10.508 "reset": true, 00:25:10.508 "nvme_admin": false, 00:25:10.508 "nvme_io": false, 00:25:10.508 "nvme_io_md": false, 00:25:10.508 "write_zeroes": true, 00:25:10.508 "zcopy": false, 00:25:10.508 "get_zone_info": false, 00:25:10.508 "zone_management": false, 00:25:10.508 "zone_append": false, 00:25:10.508 "compare": false, 00:25:10.508 "compare_and_write": false, 00:25:10.508 "abort": false, 00:25:10.508 "seek_hole": false, 00:25:10.508 "seek_data": false, 00:25:10.508 "copy": false, 00:25:10.508 "nvme_iov_md": false 00:25:10.508 }, 00:25:10.508 "memory_domains": [ 00:25:10.508 { 00:25:10.508 "dma_device_id": "system", 00:25:10.508 "dma_device_type": 1 00:25:10.508 }, 00:25:10.508 { 00:25:10.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.508 "dma_device_type": 
2 00:25:10.508 }, 00:25:10.508 { 00:25:10.508 "dma_device_id": "system", 00:25:10.508 "dma_device_type": 1 00:25:10.508 }, 00:25:10.508 { 00:25:10.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.508 "dma_device_type": 2 00:25:10.508 } 00:25:10.508 ], 00:25:10.508 "driver_specific": { 00:25:10.508 "raid": { 00:25:10.508 "uuid": "cfd96057-5eaf-4cb8-9d8c-b084afc65394", 00:25:10.508 "strip_size_kb": 64, 00:25:10.508 "state": "online", 00:25:10.508 "raid_level": "raid0", 00:25:10.508 "superblock": true, 00:25:10.508 "num_base_bdevs": 2, 00:25:10.508 "num_base_bdevs_discovered": 2, 00:25:10.508 "num_base_bdevs_operational": 2, 00:25:10.508 "base_bdevs_list": [ 00:25:10.508 { 00:25:10.508 "name": "pt1", 00:25:10.508 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:10.508 "is_configured": true, 00:25:10.508 "data_offset": 2048, 00:25:10.508 "data_size": 63488 00:25:10.508 }, 00:25:10.508 { 00:25:10.508 "name": "pt2", 00:25:10.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:10.508 "is_configured": true, 00:25:10.508 "data_offset": 2048, 00:25:10.508 "data_size": 63488 00:25:10.508 } 00:25:10.508 ] 00:25:10.508 } 00:25:10.508 } 00:25:10.508 }' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:10.508 pt2' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:10.508 15:54:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:10.508 
[2024-11-05 15:54:42.849036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cfd96057-5eaf-4cb8-9d8c-b084afc65394 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cfd96057-5eaf-4cb8-9d8c-b084afc65394 ']' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.508 [2024-11-05 15:54:42.876798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.508 [2024-11-05 15:54:42.876820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:10.508 [2024-11-05 15:54:42.876893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:10.508 [2024-11-05 15:54:42.876932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:10.508 [2024-11-05 15:54:42.876942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.508 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.766 [2024-11-05 15:54:42.976831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:10.766 [2024-11-05 15:54:42.978405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:10.766 [2024-11-05 15:54:42.978464] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:10.766 [2024-11-05 15:54:42.978503] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:10.766 [2024-11-05 15:54:42.978514] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.766 [2024-11-05 15:54:42.978524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:10.766 request: 00:25:10.766 { 00:25:10.766 "name": "raid_bdev1", 00:25:10.766 "raid_level": "raid0", 00:25:10.766 "base_bdevs": [ 00:25:10.766 "malloc1", 00:25:10.766 "malloc2" 00:25:10.766 ], 00:25:10.766 "strip_size_kb": 64, 00:25:10.766 "superblock": false, 00:25:10.766 "method": "bdev_raid_create", 00:25:10.766 "req_id": 1 00:25:10.766 } 00:25:10.766 Got JSON-RPC error response 00:25:10.766 response: 00:25:10.766 { 00:25:10.766 "code": -17, 00:25:10.766 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:10.766 } 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.766 15:54:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 
00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.766 [2024-11-05 15:54:43.020821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:10.766 [2024-11-05 15:54:43.020874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.766 [2024-11-05 15:54:43.020889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:10.766 [2024-11-05 15:54:43.020898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.766 [2024-11-05 15:54:43.022666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.766 [2024-11-05 15:54:43.022700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:10.766 [2024-11-05 15:54:43.022759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:10.766 [2024-11-05 15:54:43.022802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:10.766 pt1 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.766 "name": "raid_bdev1", 00:25:10.766 "uuid": "cfd96057-5eaf-4cb8-9d8c-b084afc65394", 00:25:10.766 "strip_size_kb": 64, 00:25:10.766 "state": "configuring", 00:25:10.766 "raid_level": "raid0", 00:25:10.766 "superblock": true, 00:25:10.766 "num_base_bdevs": 2, 00:25:10.766 "num_base_bdevs_discovered": 1, 00:25:10.766 "num_base_bdevs_operational": 2, 00:25:10.766 "base_bdevs_list": [ 00:25:10.766 { 00:25:10.766 "name": "pt1", 00:25:10.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:10.766 "is_configured": true, 00:25:10.766 "data_offset": 2048, 00:25:10.766 "data_size": 63488 00:25:10.766 }, 00:25:10.766 { 00:25:10.766 "name": null, 00:25:10.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:10.766 "is_configured": false, 
00:25:10.766 "data_offset": 2048, 00:25:10.766 "data_size": 63488 00:25:10.766 } 00:25:10.766 ] 00:25:10.766 }' 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.766 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.028 [2024-11-05 15:54:43.324908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:11.028 [2024-11-05 15:54:43.324959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.028 [2024-11-05 15:54:43.324974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:11.028 [2024-11-05 15:54:43.324982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.028 [2024-11-05 15:54:43.325324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.028 [2024-11-05 15:54:43.325345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:11.028 [2024-11-05 15:54:43.325401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:11.028 [2024-11-05 15:54:43.325419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:11.028 [2024-11-05 15:54:43.325503] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:11.028 [2024-11-05 15:54:43.325511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:11.028 [2024-11-05 15:54:43.325692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:11.028 [2024-11-05 15:54:43.325787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:11.028 [2024-11-05 15:54:43.325794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:11.028 [2024-11-05 15:54:43.325911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.028 pt2 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.028 15:54:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.028 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.028 "name": "raid_bdev1", 00:25:11.028 "uuid": "cfd96057-5eaf-4cb8-9d8c-b084afc65394", 00:25:11.028 "strip_size_kb": 64, 00:25:11.028 "state": "online", 00:25:11.028 "raid_level": "raid0", 00:25:11.028 "superblock": true, 00:25:11.028 "num_base_bdevs": 2, 00:25:11.028 "num_base_bdevs_discovered": 2, 00:25:11.028 "num_base_bdevs_operational": 2, 00:25:11.028 "base_bdevs_list": [ 00:25:11.029 { 00:25:11.029 "name": "pt1", 00:25:11.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:11.029 "is_configured": true, 00:25:11.029 "data_offset": 2048, 00:25:11.029 "data_size": 63488 00:25:11.029 }, 00:25:11.029 { 00:25:11.029 "name": "pt2", 00:25:11.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:11.029 "is_configured": true, 00:25:11.029 "data_offset": 2048, 00:25:11.029 "data_size": 63488 00:25:11.029 } 00:25:11.029 ] 00:25:11.029 }' 00:25:11.029 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.029 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.286 [2024-11-05 15:54:43.633176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:11.286 "name": "raid_bdev1", 00:25:11.286 "aliases": [ 00:25:11.286 "cfd96057-5eaf-4cb8-9d8c-b084afc65394" 00:25:11.286 ], 00:25:11.286 "product_name": "Raid Volume", 00:25:11.286 "block_size": 512, 00:25:11.286 "num_blocks": 126976, 00:25:11.286 "uuid": "cfd96057-5eaf-4cb8-9d8c-b084afc65394", 00:25:11.286 "assigned_rate_limits": { 00:25:11.286 "rw_ios_per_sec": 0, 00:25:11.286 "rw_mbytes_per_sec": 0, 00:25:11.286 "r_mbytes_per_sec": 0, 00:25:11.286 "w_mbytes_per_sec": 0 00:25:11.286 }, 00:25:11.286 "claimed": false, 00:25:11.286 "zoned": false, 00:25:11.286 "supported_io_types": { 00:25:11.286 "read": true, 00:25:11.286 "write": true, 00:25:11.286 "unmap": true, 
00:25:11.286 "flush": true, 00:25:11.286 "reset": true, 00:25:11.286 "nvme_admin": false, 00:25:11.286 "nvme_io": false, 00:25:11.286 "nvme_io_md": false, 00:25:11.286 "write_zeroes": true, 00:25:11.286 "zcopy": false, 00:25:11.286 "get_zone_info": false, 00:25:11.286 "zone_management": false, 00:25:11.286 "zone_append": false, 00:25:11.286 "compare": false, 00:25:11.286 "compare_and_write": false, 00:25:11.286 "abort": false, 00:25:11.286 "seek_hole": false, 00:25:11.286 "seek_data": false, 00:25:11.286 "copy": false, 00:25:11.286 "nvme_iov_md": false 00:25:11.286 }, 00:25:11.286 "memory_domains": [ 00:25:11.286 { 00:25:11.286 "dma_device_id": "system", 00:25:11.286 "dma_device_type": 1 00:25:11.286 }, 00:25:11.286 { 00:25:11.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.286 "dma_device_type": 2 00:25:11.286 }, 00:25:11.286 { 00:25:11.286 "dma_device_id": "system", 00:25:11.286 "dma_device_type": 1 00:25:11.286 }, 00:25:11.286 { 00:25:11.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.286 "dma_device_type": 2 00:25:11.286 } 00:25:11.286 ], 00:25:11.286 "driver_specific": { 00:25:11.286 "raid": { 00:25:11.286 "uuid": "cfd96057-5eaf-4cb8-9d8c-b084afc65394", 00:25:11.286 "strip_size_kb": 64, 00:25:11.286 "state": "online", 00:25:11.286 "raid_level": "raid0", 00:25:11.286 "superblock": true, 00:25:11.286 "num_base_bdevs": 2, 00:25:11.286 "num_base_bdevs_discovered": 2, 00:25:11.286 "num_base_bdevs_operational": 2, 00:25:11.286 "base_bdevs_list": [ 00:25:11.286 { 00:25:11.286 "name": "pt1", 00:25:11.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:11.286 "is_configured": true, 00:25:11.286 "data_offset": 2048, 00:25:11.286 "data_size": 63488 00:25:11.286 }, 00:25:11.286 { 00:25:11.286 "name": "pt2", 00:25:11.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:11.286 "is_configured": true, 00:25:11.286 "data_offset": 2048, 00:25:11.286 "data_size": 63488 00:25:11.286 } 00:25:11.286 ] 00:25:11.286 } 00:25:11.286 } 00:25:11.286 }' 
00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:11.286 pt2' 00:25:11.286 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:11.543 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.544 [2024-11-05 15:54:43.785169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cfd96057-5eaf-4cb8-9d8c-b084afc65394 '!=' cfd96057-5eaf-4cb8-9d8c-b084afc65394 ']' 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 59794 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59794 ']' 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59794 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59794 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:11.544 killing process with pid 59794 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59794' 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 59794 00:25:11.544 [2024-11-05 15:54:43.843407] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:11.544 [2024-11-05 15:54:43.843480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:11.544 15:54:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 59794 00:25:11.544 [2024-11-05 15:54:43.843521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:11.544 [2024-11-05 15:54:43.843531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:11.544 [2024-11-05 15:54:43.944568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:12.109 15:54:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:12.109 00:25:12.109 real 0m3.099s 00:25:12.109 user 0m4.446s 00:25:12.109 sys 0m0.477s 00:25:12.109 15:54:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:12.109 15:54:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.109 ************************************ 00:25:12.109 END TEST raid_superblock_test 00:25:12.109 ************************************ 00:25:12.367 15:54:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test 
raid_io_error_test raid0 2 read 00:25:12.367 15:54:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:12.367 15:54:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:12.367 15:54:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:12.367 ************************************ 00:25:12.367 START TEST raid_read_error_test 00:25:12.367 ************************************ 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local 
raid_bdev_name=raid_bdev1 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UIdRzjWN0i 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=59995 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 59995 00:25:12.367 15:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 59995 ']' 00:25:12.368 15:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:12.368 15:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.368 15:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:12.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.368 15:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:12.368 15:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:12.368 15:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.368 [2024-11-05 15:54:44.608429] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:12.368 [2024-11-05 15:54:44.608578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59995 ] 00:25:12.368 [2024-11-05 15:54:44.764260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.626 [2024-11-05 15:54:44.847714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.626 [2024-11-05 15:54:44.957744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:12.626 [2024-11-05 15:54:44.957781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.193 BaseBdev1_malloc 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.193 true 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.193 [2024-11-05 15:54:45.503637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:13.193 [2024-11-05 15:54:45.503685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.193 [2024-11-05 15:54:45.503700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:13.193 [2024-11-05 15:54:45.503709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.193 [2024-11-05 15:54:45.505465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.193 [2024-11-05 15:54:45.505499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:13.193 BaseBdev1 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:25:13.193 BaseBdev2_malloc 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.193 true 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.193 [2024-11-05 15:54:45.543119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:13.193 [2024-11-05 15:54:45.543161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.193 [2024-11-05 15:54:45.543173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:13.193 [2024-11-05 15:54:45.543182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.193 [2024-11-05 15:54:45.544903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.193 [2024-11-05 15:54:45.544933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:13.193 BaseBdev2 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:25:13.193 15:54:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.193 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.193 [2024-11-05 15:54:45.551171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:13.193 [2024-11-05 15:54:45.552650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:13.193 [2024-11-05 15:54:45.552800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:13.193 [2024-11-05 15:54:45.552820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:13.193 [2024-11-05 15:54:45.553022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:13.194 [2024-11-05 15:54:45.553145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:13.194 [2024-11-05 15:54:45.553157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:13.194 [2024-11-05 15:54:45.553269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.194 "name": "raid_bdev1", 00:25:13.194 "uuid": "578451af-04ac-4a2d-9ed9-c230bc401f99", 00:25:13.194 "strip_size_kb": 64, 00:25:13.194 "state": "online", 00:25:13.194 "raid_level": "raid0", 00:25:13.194 "superblock": true, 00:25:13.194 "num_base_bdevs": 2, 00:25:13.194 "num_base_bdevs_discovered": 2, 00:25:13.194 "num_base_bdevs_operational": 2, 00:25:13.194 "base_bdevs_list": [ 00:25:13.194 { 00:25:13.194 "name": "BaseBdev1", 00:25:13.194 "uuid": "bc792927-4110-59fe-82c0-360e24f915b2", 00:25:13.194 "is_configured": true, 00:25:13.194 "data_offset": 2048, 00:25:13.194 "data_size": 63488 00:25:13.194 }, 00:25:13.194 { 00:25:13.194 "name": "BaseBdev2", 00:25:13.194 "uuid": "a46bfb3b-7778-52d8-82d8-c6c62d8734dd", 00:25:13.194 "is_configured": true, 00:25:13.194 "data_offset": 2048, 00:25:13.194 "data_size": 63488 00:25:13.194 } 00:25:13.194 ] 00:25:13.194 }' 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.194 15:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.452 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:13.452 15:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:13.710 [2024-11-05 15:54:45.952013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.643 "name": "raid_bdev1", 00:25:14.643 "uuid": "578451af-04ac-4a2d-9ed9-c230bc401f99", 00:25:14.643 "strip_size_kb": 64, 00:25:14.643 "state": "online", 00:25:14.643 "raid_level": "raid0", 00:25:14.643 "superblock": true, 00:25:14.643 "num_base_bdevs": 2, 00:25:14.643 "num_base_bdevs_discovered": 2, 00:25:14.643 "num_base_bdevs_operational": 2, 00:25:14.643 "base_bdevs_list": [ 00:25:14.643 { 00:25:14.643 "name": "BaseBdev1", 00:25:14.643 "uuid": "bc792927-4110-59fe-82c0-360e24f915b2", 00:25:14.643 "is_configured": true, 00:25:14.643 "data_offset": 2048, 00:25:14.643 "data_size": 63488 00:25:14.643 }, 00:25:14.643 { 00:25:14.643 "name": "BaseBdev2", 00:25:14.643 "uuid": "a46bfb3b-7778-52d8-82d8-c6c62d8734dd", 00:25:14.643 "is_configured": true, 00:25:14.643 "data_offset": 2048, 00:25:14.643 "data_size": 63488 00:25:14.643 } 00:25:14.643 ] 00:25:14.643 }' 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.643 15:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.901 [2024-11-05 15:54:47.187016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:14.901 [2024-11-05 15:54:47.187050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.901 [2024-11-05 15:54:47.189446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.901 [2024-11-05 15:54:47.189488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:14.901 [2024-11-05 15:54:47.189513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.901 [2024-11-05 15:54:47.189522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:14.901 { 00:25:14.901 "results": [ 00:25:14.901 { 00:25:14.901 "job": "raid_bdev1", 00:25:14.901 "core_mask": "0x1", 00:25:14.901 "workload": "randrw", 00:25:14.901 "percentage": 50, 00:25:14.901 "status": "finished", 00:25:14.901 "queue_depth": 1, 00:25:14.901 "io_size": 131072, 00:25:14.901 "runtime": 1.233524, 00:25:14.901 "iops": 18576.85784792189, 00:25:14.901 "mibps": 2322.107230990236, 00:25:14.901 "io_failed": 1, 00:25:14.901 "io_timeout": 0, 00:25:14.901 "avg_latency_us": 73.68505176094634, 00:25:14.901 "min_latency_us": 25.403076923076924, 00:25:14.901 "max_latency_us": 1392.64 00:25:14.901 } 00:25:14.901 ], 00:25:14.901 "core_count": 1 00:25:14.901 } 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 59995 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 59995 ']' 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 59995 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59995 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:14.901 killing process with pid 59995 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59995' 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 59995 00:25:14.901 [2024-11-05 15:54:47.217812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:14.901 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 59995 00:25:14.901 [2024-11-05 15:54:47.284029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:15.467 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:15.467 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UIdRzjWN0i 00:25:15.467 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:15.725 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:25:15.725 15:54:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:25:15.725 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:15.725 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:15.725 15:54:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:25:15.725 00:25:15.725 real 0m3.347s 00:25:15.725 user 0m4.088s 00:25:15.725 sys 0m0.362s 00:25:15.725 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:15.725 15:54:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.725 ************************************ 00:25:15.725 END TEST raid_read_error_test 00:25:15.725 ************************************ 00:25:15.725 15:54:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:25:15.725 15:54:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:15.725 15:54:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:15.725 15:54:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:15.725 ************************************ 00:25:15.725 START TEST raid_write_error_test 00:25:15.725 ************************************ 00:25:15.725 15:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:25:15.725 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:15.726 15:54:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tj3iS3lkMN 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60124 00:25:15.726 15:54:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60124 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 60124 ']' 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.726 15:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.726 [2024-11-05 15:54:47.987677] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:15.726 [2024-11-05 15:54:47.987775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60124 ] 00:25:15.726 [2024-11-05 15:54:48.136527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.984 [2024-11-05 15:54:48.216102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.984 [2024-11-05 15:54:48.324569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.984 [2024-11-05 15:54:48.324615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.549 BaseBdev1_malloc 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.549 true 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.549 [2024-11-05 15:54:48.870186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:16.549 [2024-11-05 15:54:48.870236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.549 [2024-11-05 15:54:48.870251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:16.549 [2024-11-05 15:54:48.870260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.549 [2024-11-05 15:54:48.872027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.549 [2024-11-05 15:54:48.872060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:16.549 BaseBdev1 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.549 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.550 BaseBdev2_malloc 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:16.550 15:54:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.550 true 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.550 [2024-11-05 15:54:48.909619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:16.550 [2024-11-05 15:54:48.909663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.550 [2024-11-05 15:54:48.909676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:16.550 [2024-11-05 15:54:48.909684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.550 [2024-11-05 15:54:48.911406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.550 [2024-11-05 15:54:48.911440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:16.550 BaseBdev2 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.550 [2024-11-05 15:54:48.917672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:25:16.550 [2024-11-05 15:54:48.919192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:16.550 [2024-11-05 15:54:48.919345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:16.550 [2024-11-05 15:54:48.919357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:16.550 [2024-11-05 15:54:48.919548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:16.550 [2024-11-05 15:54:48.919675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:16.550 [2024-11-05 15:54:48.919684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:16.550 [2024-11-05 15:54:48.919796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.550 "name": "raid_bdev1", 00:25:16.550 "uuid": "b04001f5-6f81-48fe-8e3e-f7379e5276cb", 00:25:16.550 "strip_size_kb": 64, 00:25:16.550 "state": "online", 00:25:16.550 "raid_level": "raid0", 00:25:16.550 "superblock": true, 00:25:16.550 "num_base_bdevs": 2, 00:25:16.550 "num_base_bdevs_discovered": 2, 00:25:16.550 "num_base_bdevs_operational": 2, 00:25:16.550 "base_bdevs_list": [ 00:25:16.550 { 00:25:16.550 "name": "BaseBdev1", 00:25:16.550 "uuid": "fed4ce39-eeb4-5f1a-99da-934e0a9d1ced", 00:25:16.550 "is_configured": true, 00:25:16.550 "data_offset": 2048, 00:25:16.550 "data_size": 63488 00:25:16.550 }, 00:25:16.550 { 00:25:16.550 "name": "BaseBdev2", 00:25:16.550 "uuid": "78a9d73f-421d-574b-9a76-59952d366edf", 00:25:16.550 "is_configured": true, 00:25:16.550 "data_offset": 2048, 00:25:16.550 "data_size": 63488 00:25:16.550 } 00:25:16.550 ] 00:25:16.550 }' 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.550 15:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.115 15:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:17.115 15:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:17.115 [2024-11-05 15:54:49.318548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:18.046 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.047 "name": "raid_bdev1", 00:25:18.047 "uuid": "b04001f5-6f81-48fe-8e3e-f7379e5276cb", 00:25:18.047 "strip_size_kb": 64, 00:25:18.047 "state": "online", 00:25:18.047 "raid_level": "raid0", 00:25:18.047 "superblock": true, 00:25:18.047 "num_base_bdevs": 2, 00:25:18.047 "num_base_bdevs_discovered": 2, 00:25:18.047 "num_base_bdevs_operational": 2, 00:25:18.047 "base_bdevs_list": [ 00:25:18.047 { 00:25:18.047 "name": "BaseBdev1", 00:25:18.047 "uuid": "fed4ce39-eeb4-5f1a-99da-934e0a9d1ced", 00:25:18.047 "is_configured": true, 00:25:18.047 "data_offset": 2048, 00:25:18.047 "data_size": 63488 00:25:18.047 }, 00:25:18.047 { 00:25:18.047 "name": "BaseBdev2", 00:25:18.047 "uuid": "78a9d73f-421d-574b-9a76-59952d366edf", 00:25:18.047 "is_configured": true, 00:25:18.047 "data_offset": 2048, 00:25:18.047 "data_size": 63488 00:25:18.047 } 00:25:18.047 ] 00:25:18.047 }' 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.047 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.305 [2024-11-05 15:54:50.551206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:18.305 [2024-11-05 15:54:50.551236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:18.305 [2024-11-05 15:54:50.553721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.305 [2024-11-05 15:54:50.553889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.305 [2024-11-05 15:54:50.553945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.305 [2024-11-05 15:54:50.553956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:18.305 { 00:25:18.305 "results": [ 00:25:18.305 { 00:25:18.305 "job": "raid_bdev1", 00:25:18.305 "core_mask": "0x1", 00:25:18.305 "workload": "randrw", 00:25:18.305 "percentage": 50, 00:25:18.305 "status": "finished", 00:25:18.305 "queue_depth": 1, 00:25:18.305 "io_size": 131072, 00:25:18.305 "runtime": 1.231153, 00:25:18.305 "iops": 18229.253390927042, 00:25:18.305 "mibps": 2278.6566738658803, 00:25:18.305 "io_failed": 1, 00:25:18.305 "io_timeout": 0, 00:25:18.305 "avg_latency_us": 75.21731105109468, 00:25:18.305 "min_latency_us": 26.38769230769231, 00:25:18.305 "max_latency_us": 1367.4338461538462 00:25:18.305 } 00:25:18.305 ], 00:25:18.305 "core_count": 1 00:25:18.305 } 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60124 00:25:18.305 15:54:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 60124 ']' 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 60124 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60124 00:25:18.305 killing process with pid 60124 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60124' 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 60124 00:25:18.305 15:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 60124 00:25:18.305 [2024-11-05 15:54:50.580873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:18.305 [2024-11-05 15:54:50.647910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tj3iS3lkMN 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:18.894 15:54:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:25:18.894 00:25:18.894 real 0m3.327s 00:25:18.894 user 0m4.034s 00:25:18.894 sys 0m0.346s 00:25:18.894 ************************************ 00:25:18.894 END TEST raid_write_error_test 00:25:18.894 ************************************ 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:18.894 15:54:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.894 15:54:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:18.894 15:54:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:25:18.894 15:54:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:18.894 15:54:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:18.894 15:54:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:18.894 ************************************ 00:25:18.894 START TEST raid_state_function_test 00:25:18.894 ************************************ 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:18.894 Process raid pid: 60251 00:25:18.894 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60251 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60251' 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60251 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60251 ']' 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.894 15:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:19.152 [2024-11-05 15:54:51.362036] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:19.152 [2024-11-05 15:54:51.362155] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.152 [2024-11-05 15:54:51.523882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.409 [2024-11-05 15:54:51.623770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.410 [2024-11-05 15:54:51.759798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.410 [2024-11-05 15:54:51.759857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.975 [2024-11-05 15:54:52.211953] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:19.975 [2024-11-05 15:54:52.212010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:19.975 [2024-11-05 15:54:52.212021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:19.975 [2024-11-05 15:54:52.212030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.975 15:54:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.975 "name": "Existed_Raid", 00:25:19.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.975 "strip_size_kb": 64, 00:25:19.975 "state": "configuring", 00:25:19.975 
"raid_level": "concat", 00:25:19.975 "superblock": false, 00:25:19.975 "num_base_bdevs": 2, 00:25:19.975 "num_base_bdevs_discovered": 0, 00:25:19.975 "num_base_bdevs_operational": 2, 00:25:19.975 "base_bdevs_list": [ 00:25:19.975 { 00:25:19.975 "name": "BaseBdev1", 00:25:19.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.975 "is_configured": false, 00:25:19.975 "data_offset": 0, 00:25:19.975 "data_size": 0 00:25:19.975 }, 00:25:19.975 { 00:25:19.975 "name": "BaseBdev2", 00:25:19.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.975 "is_configured": false, 00:25:19.975 "data_offset": 0, 00:25:19.975 "data_size": 0 00:25:19.975 } 00:25:19.975 ] 00:25:19.975 }' 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.975 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.233 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.234 [2024-11-05 15:54:52.531975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:20.234 [2024-11-05 15:54:52.532005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:25:20.234 [2024-11-05 15:54:52.539975] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:20.234 [2024-11-05 15:54:52.540092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:20.234 [2024-11-05 15:54:52.540154] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:20.234 [2024-11-05 15:54:52.540184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.234 [2024-11-05 15:54:52.572233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:20.234 BaseBdev1 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.234 [ 00:25:20.234 { 00:25:20.234 "name": "BaseBdev1", 00:25:20.234 "aliases": [ 00:25:20.234 "c84fd240-0893-4249-a72c-b70c84622d3b" 00:25:20.234 ], 00:25:20.234 "product_name": "Malloc disk", 00:25:20.234 "block_size": 512, 00:25:20.234 "num_blocks": 65536, 00:25:20.234 "uuid": "c84fd240-0893-4249-a72c-b70c84622d3b", 00:25:20.234 "assigned_rate_limits": { 00:25:20.234 "rw_ios_per_sec": 0, 00:25:20.234 "rw_mbytes_per_sec": 0, 00:25:20.234 "r_mbytes_per_sec": 0, 00:25:20.234 "w_mbytes_per_sec": 0 00:25:20.234 }, 00:25:20.234 "claimed": true, 00:25:20.234 "claim_type": "exclusive_write", 00:25:20.234 "zoned": false, 00:25:20.234 "supported_io_types": { 00:25:20.234 "read": true, 00:25:20.234 "write": true, 00:25:20.234 "unmap": true, 00:25:20.234 "flush": true, 00:25:20.234 "reset": true, 00:25:20.234 "nvme_admin": false, 00:25:20.234 "nvme_io": false, 00:25:20.234 "nvme_io_md": false, 00:25:20.234 "write_zeroes": true, 00:25:20.234 "zcopy": true, 00:25:20.234 "get_zone_info": false, 00:25:20.234 "zone_management": false, 00:25:20.234 "zone_append": false, 00:25:20.234 "compare": false, 00:25:20.234 "compare_and_write": false, 00:25:20.234 "abort": true, 00:25:20.234 "seek_hole": false, 00:25:20.234 "seek_data": false, 00:25:20.234 "copy": true, 00:25:20.234 "nvme_iov_md": 
false 00:25:20.234 }, 00:25:20.234 "memory_domains": [ 00:25:20.234 { 00:25:20.234 "dma_device_id": "system", 00:25:20.234 "dma_device_type": 1 00:25:20.234 }, 00:25:20.234 { 00:25:20.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.234 "dma_device_type": 2 00:25:20.234 } 00:25:20.234 ], 00:25:20.234 "driver_specific": {} 00:25:20.234 } 00:25:20.234 ] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.234 
15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.234 "name": "Existed_Raid", 00:25:20.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.234 "strip_size_kb": 64, 00:25:20.234 "state": "configuring", 00:25:20.234 "raid_level": "concat", 00:25:20.234 "superblock": false, 00:25:20.234 "num_base_bdevs": 2, 00:25:20.234 "num_base_bdevs_discovered": 1, 00:25:20.234 "num_base_bdevs_operational": 2, 00:25:20.234 "base_bdevs_list": [ 00:25:20.234 { 00:25:20.234 "name": "BaseBdev1", 00:25:20.234 "uuid": "c84fd240-0893-4249-a72c-b70c84622d3b", 00:25:20.234 "is_configured": true, 00:25:20.234 "data_offset": 0, 00:25:20.234 "data_size": 65536 00:25:20.234 }, 00:25:20.234 { 00:25:20.234 "name": "BaseBdev2", 00:25:20.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.234 "is_configured": false, 00:25:20.234 "data_offset": 0, 00:25:20.234 "data_size": 0 00:25:20.234 } 00:25:20.234 ] 00:25:20.234 }' 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.234 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.492 [2024-11-05 15:54:52.888340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:20.492 [2024-11-05 15:54:52.888383] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.492 [2024-11-05 15:54:52.896389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:20.492 [2024-11-05 15:54:52.898303] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:20.492 [2024-11-05 15:54:52.898433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.492 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.493 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.493 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.493 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.750 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.750 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.750 "name": "Existed_Raid", 00:25:20.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.750 "strip_size_kb": 64, 00:25:20.750 "state": "configuring", 00:25:20.750 "raid_level": "concat", 00:25:20.750 "superblock": false, 00:25:20.750 "num_base_bdevs": 2, 00:25:20.750 "num_base_bdevs_discovered": 1, 00:25:20.750 "num_base_bdevs_operational": 2, 00:25:20.750 "base_bdevs_list": [ 00:25:20.750 { 00:25:20.750 "name": "BaseBdev1", 00:25:20.750 "uuid": "c84fd240-0893-4249-a72c-b70c84622d3b", 00:25:20.750 "is_configured": true, 00:25:20.750 "data_offset": 0, 00:25:20.750 "data_size": 65536 00:25:20.750 }, 00:25:20.750 { 00:25:20.750 "name": "BaseBdev2", 00:25:20.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.750 "is_configured": false, 00:25:20.750 "data_offset": 0, 00:25:20.750 "data_size": 0 00:25:20.750 } 
00:25:20.750 ] 00:25:20.750 }' 00:25:20.750 15:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.750 15:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.009 [2024-11-05 15:54:53.247547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.009 [2024-11-05 15:54:53.247595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:21.009 [2024-11-05 15:54:53.247604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:21.009 [2024-11-05 15:54:53.247890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:21.009 [2024-11-05 15:54:53.248034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:21.009 [2024-11-05 15:54:53.248047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:21.009 [2024-11-05 15:54:53.248287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.009 BaseBdev2 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:21.009 15:54:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.009 [ 00:25:21.009 { 00:25:21.009 "name": "BaseBdev2", 00:25:21.009 "aliases": [ 00:25:21.009 "af7dcbb4-f951-46dd-88d1-f4e165e9b72a" 00:25:21.009 ], 00:25:21.009 "product_name": "Malloc disk", 00:25:21.009 "block_size": 512, 00:25:21.009 "num_blocks": 65536, 00:25:21.009 "uuid": "af7dcbb4-f951-46dd-88d1-f4e165e9b72a", 00:25:21.009 "assigned_rate_limits": { 00:25:21.009 "rw_ios_per_sec": 0, 00:25:21.009 "rw_mbytes_per_sec": 0, 00:25:21.009 "r_mbytes_per_sec": 0, 00:25:21.009 "w_mbytes_per_sec": 0 00:25:21.009 }, 00:25:21.009 "claimed": true, 00:25:21.009 "claim_type": "exclusive_write", 00:25:21.009 "zoned": false, 00:25:21.009 "supported_io_types": { 00:25:21.009 "read": true, 00:25:21.009 "write": true, 00:25:21.009 "unmap": true, 00:25:21.009 "flush": true, 00:25:21.009 "reset": true, 00:25:21.009 "nvme_admin": false, 00:25:21.009 "nvme_io": false, 00:25:21.009 "nvme_io_md": 
false, 00:25:21.009 "write_zeroes": true, 00:25:21.009 "zcopy": true, 00:25:21.009 "get_zone_info": false, 00:25:21.009 "zone_management": false, 00:25:21.009 "zone_append": false, 00:25:21.009 "compare": false, 00:25:21.009 "compare_and_write": false, 00:25:21.009 "abort": true, 00:25:21.009 "seek_hole": false, 00:25:21.009 "seek_data": false, 00:25:21.009 "copy": true, 00:25:21.009 "nvme_iov_md": false 00:25:21.009 }, 00:25:21.009 "memory_domains": [ 00:25:21.009 { 00:25:21.009 "dma_device_id": "system", 00:25:21.009 "dma_device_type": 1 00:25:21.009 }, 00:25:21.009 { 00:25:21.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.009 "dma_device_type": 2 00:25:21.009 } 00:25:21.009 ], 00:25:21.009 "driver_specific": {} 00:25:21.009 } 00:25:21.009 ] 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.009 "name": "Existed_Raid", 00:25:21.009 "uuid": "2872379f-7a4c-49f9-adda-ff70cf7f8239", 00:25:21.009 "strip_size_kb": 64, 00:25:21.009 "state": "online", 00:25:21.009 "raid_level": "concat", 00:25:21.009 "superblock": false, 00:25:21.009 "num_base_bdevs": 2, 00:25:21.009 "num_base_bdevs_discovered": 2, 00:25:21.009 "num_base_bdevs_operational": 2, 00:25:21.009 "base_bdevs_list": [ 00:25:21.009 { 00:25:21.009 "name": "BaseBdev1", 00:25:21.009 "uuid": "c84fd240-0893-4249-a72c-b70c84622d3b", 00:25:21.009 "is_configured": true, 00:25:21.009 "data_offset": 0, 00:25:21.009 "data_size": 65536 00:25:21.009 }, 00:25:21.009 { 00:25:21.009 "name": "BaseBdev2", 00:25:21.009 "uuid": "af7dcbb4-f951-46dd-88d1-f4e165e9b72a", 00:25:21.009 "is_configured": true, 00:25:21.009 "data_offset": 0, 00:25:21.009 "data_size": 65536 00:25:21.009 } 00:25:21.009 ] 00:25:21.009 }' 00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:25:21.009 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.268 [2024-11-05 15:54:53.571962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:21.268 "name": "Existed_Raid", 00:25:21.268 "aliases": [ 00:25:21.268 "2872379f-7a4c-49f9-adda-ff70cf7f8239" 00:25:21.268 ], 00:25:21.268 "product_name": "Raid Volume", 00:25:21.268 "block_size": 512, 00:25:21.268 "num_blocks": 131072, 00:25:21.268 "uuid": "2872379f-7a4c-49f9-adda-ff70cf7f8239", 00:25:21.268 "assigned_rate_limits": { 00:25:21.268 "rw_ios_per_sec": 0, 00:25:21.268 "rw_mbytes_per_sec": 0, 00:25:21.268 "r_mbytes_per_sec": 
0, 00:25:21.268 "w_mbytes_per_sec": 0 00:25:21.268 }, 00:25:21.268 "claimed": false, 00:25:21.268 "zoned": false, 00:25:21.268 "supported_io_types": { 00:25:21.268 "read": true, 00:25:21.268 "write": true, 00:25:21.268 "unmap": true, 00:25:21.268 "flush": true, 00:25:21.268 "reset": true, 00:25:21.268 "nvme_admin": false, 00:25:21.268 "nvme_io": false, 00:25:21.268 "nvme_io_md": false, 00:25:21.268 "write_zeroes": true, 00:25:21.268 "zcopy": false, 00:25:21.268 "get_zone_info": false, 00:25:21.268 "zone_management": false, 00:25:21.268 "zone_append": false, 00:25:21.268 "compare": false, 00:25:21.268 "compare_and_write": false, 00:25:21.268 "abort": false, 00:25:21.268 "seek_hole": false, 00:25:21.268 "seek_data": false, 00:25:21.268 "copy": false, 00:25:21.268 "nvme_iov_md": false 00:25:21.268 }, 00:25:21.268 "memory_domains": [ 00:25:21.268 { 00:25:21.268 "dma_device_id": "system", 00:25:21.268 "dma_device_type": 1 00:25:21.268 }, 00:25:21.268 { 00:25:21.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.268 "dma_device_type": 2 00:25:21.268 }, 00:25:21.268 { 00:25:21.268 "dma_device_id": "system", 00:25:21.268 "dma_device_type": 1 00:25:21.268 }, 00:25:21.268 { 00:25:21.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.268 "dma_device_type": 2 00:25:21.268 } 00:25:21.268 ], 00:25:21.268 "driver_specific": { 00:25:21.268 "raid": { 00:25:21.268 "uuid": "2872379f-7a4c-49f9-adda-ff70cf7f8239", 00:25:21.268 "strip_size_kb": 64, 00:25:21.268 "state": "online", 00:25:21.268 "raid_level": "concat", 00:25:21.268 "superblock": false, 00:25:21.268 "num_base_bdevs": 2, 00:25:21.268 "num_base_bdevs_discovered": 2, 00:25:21.268 "num_base_bdevs_operational": 2, 00:25:21.268 "base_bdevs_list": [ 00:25:21.268 { 00:25:21.268 "name": "BaseBdev1", 00:25:21.268 "uuid": "c84fd240-0893-4249-a72c-b70c84622d3b", 00:25:21.268 "is_configured": true, 00:25:21.268 "data_offset": 0, 00:25:21.268 "data_size": 65536 00:25:21.268 }, 00:25:21.268 { 00:25:21.268 "name": "BaseBdev2", 
00:25:21.268 "uuid": "af7dcbb4-f951-46dd-88d1-f4e165e9b72a", 00:25:21.268 "is_configured": true, 00:25:21.268 "data_offset": 0, 00:25:21.268 "data_size": 65536 00:25:21.268 } 00:25:21.268 ] 00:25:21.268 } 00:25:21.268 } 00:25:21.268 }' 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:21.268 BaseBdev2' 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.268 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.526 [2024-11-05 15:54:53.731756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:21.526 [2024-11-05 15:54:53.731786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:21.526 [2024-11-05 15:54:53.731833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.526 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.526 "name": "Existed_Raid", 00:25:21.526 "uuid": "2872379f-7a4c-49f9-adda-ff70cf7f8239", 00:25:21.526 "strip_size_kb": 64, 00:25:21.526 
"state": "offline", 00:25:21.526 "raid_level": "concat", 00:25:21.526 "superblock": false, 00:25:21.526 "num_base_bdevs": 2, 00:25:21.526 "num_base_bdevs_discovered": 1, 00:25:21.526 "num_base_bdevs_operational": 1, 00:25:21.526 "base_bdevs_list": [ 00:25:21.526 { 00:25:21.526 "name": null, 00:25:21.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.526 "is_configured": false, 00:25:21.527 "data_offset": 0, 00:25:21.527 "data_size": 65536 00:25:21.527 }, 00:25:21.527 { 00:25:21.527 "name": "BaseBdev2", 00:25:21.527 "uuid": "af7dcbb4-f951-46dd-88d1-f4e165e9b72a", 00:25:21.527 "is_configured": true, 00:25:21.527 "data_offset": 0, 00:25:21.527 "data_size": 65536 00:25:21.527 } 00:25:21.527 ] 00:25:21.527 }' 00:25:21.527 15:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.527 15:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.783 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.783 [2024-11-05 15:54:54.139050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:21.783 [2024-11-05 15:54:54.139099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60251 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60251 ']' 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 60251 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60251 00:25:22.056 killing process with pid 60251 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60251' 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60251 00:25:22.056 [2024-11-05 15:54:54.261369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:22.056 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60251 00:25:22.056 [2024-11-05 15:54:54.271981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:22.622 ************************************ 00:25:22.622 END TEST raid_state_function_test 00:25:22.622 ************************************ 00:25:22.622 15:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:22.622 00:25:22.622 real 0m3.691s 00:25:22.622 user 0m5.288s 00:25:22.622 sys 0m0.581s 00:25:22.622 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:22.622 15:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.622 15:54:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:25:22.622 15:54:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:25:22.622 15:54:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:22.622 15:54:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:22.622 ************************************ 00:25:22.622 START TEST raid_state_function_test_sb 00:25:22.622 ************************************ 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:22.622 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:22.623 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:22.623 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:22.623 Process raid pid: 60493 00:25:22.623 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60493 00:25:22.623 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60493' 00:25:22.623 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60493 00:25:22.623 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:22.623 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60493 ']' 00:25:22.880 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.880 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:22.880 15:54:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.880 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:22.880 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 [2024-11-05 15:54:55.098099] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:22.881 [2024-11-05 15:54:55.098353] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.881 [2024-11-05 15:54:55.257543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.138 [2024-11-05 15:54:55.359096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.138 [2024-11-05 15:54:55.498491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:23.138 [2024-11-05 15:54:55.498677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:23.704 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:23.704 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:25:23.704 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.705 [2024-11-05 15:54:55.920863] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1
00:25:23.705 [2024-11-05 15:54:55.920911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:23.705 [2024-11-05 15:54:55.920922] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:23.705 [2024-11-05 15:54:55.920932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:23.705 "name": "Existed_Raid",
00:25:23.705 "uuid": "2cf07448-a9aa-43a4-9f7b-b614ec267431",
00:25:23.705 "strip_size_kb": 64,
00:25:23.705 "state": "configuring",
00:25:23.705 "raid_level": "concat",
00:25:23.705 "superblock": true,
00:25:23.705 "num_base_bdevs": 2,
00:25:23.705 "num_base_bdevs_discovered": 0,
00:25:23.705 "num_base_bdevs_operational": 2,
00:25:23.705 "base_bdevs_list": [
00:25:23.705 {
00:25:23.705 "name": "BaseBdev1",
00:25:23.705 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:23.705 "is_configured": false,
00:25:23.705 "data_offset": 0,
00:25:23.705 "data_size": 0
00:25:23.705 },
00:25:23.705 {
00:25:23.705 "name": "BaseBdev2",
00:25:23.705 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:23.705 "is_configured": false,
00:25:23.705 "data_offset": 0,
00:25:23.705 "data_size": 0
00:25:23.705 }
00:25:23.705 ]
00:25:23.705 }'
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:23.705 15:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.963 [2024-11-05 15:54:56.232879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:23.963 [2024-11-05 15:54:56.232908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.963 [2024-11-05 15:54:56.240884] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:23.963 [2024-11-05 15:54:56.240920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:23.963 [2024-11-05 15:54:56.240929] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:23.963 [2024-11-05 15:54:56.240940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.963 [2024-11-05 15:54:56.273612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:23.963 BaseBdev1
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.963 [
00:25:23.963 {
00:25:23.963 "name": "BaseBdev1",
00:25:23.963 "aliases": [
00:25:23.963 "684d94a4-120a-420c-a69a-f1a1eabeb084"
00:25:23.963 ],
00:25:23.963 "product_name": "Malloc disk",
00:25:23.963 "block_size": 512,
00:25:23.963 "num_blocks": 65536,
00:25:23.963 "uuid": "684d94a4-120a-420c-a69a-f1a1eabeb084",
00:25:23.963 "assigned_rate_limits": {
00:25:23.963 "rw_ios_per_sec": 0,
00:25:23.963 "rw_mbytes_per_sec": 0,
00:25:23.963 "r_mbytes_per_sec": 0,
00:25:23.963 "w_mbytes_per_sec": 0
00:25:23.963 },
00:25:23.963 "claimed": true,
00:25:23.963 "claim_type": "exclusive_write",
00:25:23.963 "zoned": false,
00:25:23.963 "supported_io_types": {
00:25:23.963 "read": true,
00:25:23.963 "write": true,
00:25:23.963 "unmap": true,
00:25:23.963 "flush": true,
00:25:23.963 "reset": true,
00:25:23.963 "nvme_admin": false,
00:25:23.963 "nvme_io": false,
00:25:23.963 "nvme_io_md": false,
00:25:23.963 "write_zeroes": true,
00:25:23.963 "zcopy": true,
00:25:23.963 "get_zone_info": false,
00:25:23.963 "zone_management": false,
00:25:23.963 "zone_append": false,
00:25:23.963 "compare": false,
00:25:23.963 "compare_and_write": false,
00:25:23.963 "abort": true,
00:25:23.963 "seek_hole": false,
00:25:23.963 "seek_data": false,
00:25:23.963 "copy": true,
00:25:23.963 "nvme_iov_md": false
00:25:23.963 },
00:25:23.963 "memory_domains": [
00:25:23.963 {
00:25:23.963 "dma_device_id": "system",
00:25:23.963 "dma_device_type": 1
00:25:23.963 },
00:25:23.963 {
00:25:23.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:23.963 "dma_device_type": 2
00:25:23.963 }
00:25:23.963 ],
00:25:23.963 "driver_specific": {}
00:25:23.963 }
00:25:23.963 ]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:23.963 "name": "Existed_Raid",
00:25:23.963 "uuid": "0f8d5299-0891-4148-b1d7-cf38f2b2971e",
00:25:23.963 "strip_size_kb": 64,
00:25:23.963 "state": "configuring",
00:25:23.963 "raid_level": "concat",
00:25:23.963 "superblock": true,
00:25:23.963 "num_base_bdevs": 2,
00:25:23.963 "num_base_bdevs_discovered": 1,
00:25:23.963 "num_base_bdevs_operational": 2,
00:25:23.963 "base_bdevs_list": [
00:25:23.963 {
00:25:23.963 "name": "BaseBdev1",
00:25:23.963 "uuid": "684d94a4-120a-420c-a69a-f1a1eabeb084",
00:25:23.963 "is_configured": true,
00:25:23.963 "data_offset": 2048,
00:25:23.963 "data_size": 63488
00:25:23.963 },
00:25:23.963 {
00:25:23.963 "name": "BaseBdev2",
00:25:23.963 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:23.963 "is_configured": false,
00:25:23.963 "data_offset": 0,
00:25:23.963 "data_size": 0
00:25:23.963 }
00:25:23.963 ]
00:25:23.963 }'
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:23.963 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.221 [2024-11-05 15:54:56.601733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:24.221 [2024-11-05 15:54:56.601892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.221 [2024-11-05 15:54:56.609786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:24.221 [2024-11-05 15:54:56.611709] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:24.221 [2024-11-05 15:54:56.611824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.221 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.478 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:24.478 "name": "Existed_Raid",
00:25:24.478 "uuid": "3ed13e76-da43-4d3e-a51e-d12397451793",
00:25:24.478 "strip_size_kb": 64,
00:25:24.478 "state": "configuring",
00:25:24.478 "raid_level": "concat",
00:25:24.478 "superblock": true,
00:25:24.478 "num_base_bdevs": 2,
00:25:24.478 "num_base_bdevs_discovered": 1,
00:25:24.478 "num_base_bdevs_operational": 2,
00:25:24.478 "base_bdevs_list": [
00:25:24.478 {
00:25:24.478 "name": "BaseBdev1",
00:25:24.478 "uuid": "684d94a4-120a-420c-a69a-f1a1eabeb084",
00:25:24.478 "is_configured": true,
00:25:24.478 "data_offset": 2048,
00:25:24.478 "data_size": 63488
00:25:24.479 },
00:25:24.479 {
00:25:24.479 "name": "BaseBdev2",
00:25:24.479 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:24.479 "is_configured": false,
00:25:24.479 "data_offset": 0,
00:25:24.479 "data_size": 0
00:25:24.479 }
00:25:24.479 ]
00:25:24.479 }'
00:25:24.479 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:24.479 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.736 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:25:24.736 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.736 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.737 [2024-11-05 15:54:56.932359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:24.737 [2024-11-05 15:54:56.932535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:25:24.737 [2024-11-05 15:54:56.932546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:25:24.737 [2024-11-05 15:54:56.932754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:25:24.737 BaseBdev2
00:25:24.737 [2024-11-05 15:54:56.932876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:25:24.737 [2024-11-05 15:54:56.932885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:25:24.737 [2024-11-05 15:54:56.932987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.737 [
00:25:24.737 {
00:25:24.737 "name": "BaseBdev2",
00:25:24.737 "aliases": [
00:25:24.737 "9d2ce476-ced6-42ce-95d4-2477348a15b3"
00:25:24.737 ],
00:25:24.737 "product_name": "Malloc disk",
00:25:24.737 "block_size": 512,
00:25:24.737 "num_blocks": 65536,
00:25:24.737 "uuid": "9d2ce476-ced6-42ce-95d4-2477348a15b3",
00:25:24.737 "assigned_rate_limits": {
00:25:24.737 "rw_ios_per_sec": 0,
00:25:24.737 "rw_mbytes_per_sec": 0,
00:25:24.737 "r_mbytes_per_sec": 0,
00:25:24.737 "w_mbytes_per_sec": 0
00:25:24.737 },
00:25:24.737 "claimed": true,
00:25:24.737 "claim_type": "exclusive_write",
00:25:24.737 "zoned": false,
00:25:24.737 "supported_io_types": {
00:25:24.737 "read": true,
00:25:24.737 "write": true,
00:25:24.737 "unmap": true,
00:25:24.737 "flush": true,
00:25:24.737 "reset": true,
00:25:24.737 "nvme_admin": false,
00:25:24.737 "nvme_io": false,
00:25:24.737 "nvme_io_md": false,
00:25:24.737 "write_zeroes": true,
00:25:24.737 "zcopy": true,
00:25:24.737 "get_zone_info": false,
00:25:24.737 "zone_management": false,
00:25:24.737 "zone_append": false,
00:25:24.737 "compare": false,
00:25:24.737 "compare_and_write": false,
00:25:24.737 "abort": true,
00:25:24.737 "seek_hole": false,
00:25:24.737 "seek_data": false,
00:25:24.737 "copy": true,
00:25:24.737 "nvme_iov_md": false
00:25:24.737 },
00:25:24.737 "memory_domains": [
00:25:24.737 {
00:25:24.737 "dma_device_id": "system",
00:25:24.737 "dma_device_type": 1
00:25:24.737 },
00:25:24.737 {
00:25:24.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:24.737 "dma_device_type": 2
00:25:24.737 }
00:25:24.737 ],
00:25:24.737 "driver_specific": {}
00:25:24.737 }
00:25:24.737 ]
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:24.737 "name": "Existed_Raid",
00:25:24.737 "uuid": "3ed13e76-da43-4d3e-a51e-d12397451793",
00:25:24.737 "strip_size_kb": 64,
00:25:24.737 "state": "online",
00:25:24.737 "raid_level": "concat",
00:25:24.737 "superblock": true,
00:25:24.737 "num_base_bdevs": 2,
00:25:24.737 "num_base_bdevs_discovered": 2,
00:25:24.737 "num_base_bdevs_operational": 2,
00:25:24.737 "base_bdevs_list": [
00:25:24.737 {
00:25:24.737 "name": "BaseBdev1",
00:25:24.737 "uuid": "684d94a4-120a-420c-a69a-f1a1eabeb084",
00:25:24.737 "is_configured": true,
00:25:24.737 "data_offset": 2048,
00:25:24.737 "data_size": 63488
00:25:24.737 },
00:25:24.737 {
00:25:24.737 "name": "BaseBdev2",
00:25:24.737 "uuid": "9d2ce476-ced6-42ce-95d4-2477348a15b3",
00:25:24.737 "is_configured": true,
00:25:24.737 "data_offset": 2048,
00:25:24.737 "data_size": 63488
00:25:24.737 }
00:25:24.737 ]
00:25:24.737 }'
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:24.737 15:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.995 [2024-11-05 15:54:57.260714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.995 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:24.995 "name": "Existed_Raid",
00:25:24.995 "aliases": [
00:25:24.995 "3ed13e76-da43-4d3e-a51e-d12397451793"
00:25:24.995 ],
00:25:24.995 "product_name": "Raid Volume",
00:25:24.995 "block_size": 512,
00:25:24.995 "num_blocks": 126976,
00:25:24.995 "uuid": "3ed13e76-da43-4d3e-a51e-d12397451793",
00:25:24.995 "assigned_rate_limits": {
00:25:24.995 "rw_ios_per_sec": 0,
00:25:24.995 "rw_mbytes_per_sec": 0,
00:25:24.995 "r_mbytes_per_sec": 0,
00:25:24.995 "w_mbytes_per_sec": 0
00:25:24.995 },
00:25:24.995 "claimed": false,
00:25:24.995 "zoned": false,
00:25:24.996 "supported_io_types": {
00:25:24.996 "read": true,
00:25:24.996 "write": true,
00:25:24.996 "unmap": true,
00:25:24.996 "flush": true,
00:25:24.996 "reset": true,
00:25:24.996 "nvme_admin": false,
00:25:24.996 "nvme_io": false,
00:25:24.996 "nvme_io_md": false,
00:25:24.996 "write_zeroes": true,
00:25:24.996 "zcopy": false,
00:25:24.996 "get_zone_info": false,
00:25:24.996 "zone_management": false,
00:25:24.996 "zone_append": false,
00:25:24.996 "compare": false,
00:25:24.996 "compare_and_write": false,
00:25:24.996 "abort": false,
00:25:24.996 "seek_hole": false,
00:25:24.996 "seek_data": false,
00:25:24.996 "copy": false,
00:25:24.996 "nvme_iov_md": false
00:25:24.996 },
00:25:24.996 "memory_domains": [
00:25:24.996 {
00:25:24.996 "dma_device_id": "system",
00:25:24.996 "dma_device_type": 1
00:25:24.996 },
00:25:24.996 {
00:25:24.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:24.996 "dma_device_type": 2
00:25:24.996 },
00:25:24.996 {
00:25:24.996 "dma_device_id": "system",
00:25:24.996 "dma_device_type": 1
00:25:24.996 },
00:25:24.996 {
00:25:24.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:24.996 "dma_device_type": 2
00:25:24.996 }
00:25:24.996 ],
00:25:24.996 "driver_specific": {
00:25:24.996 "raid": {
00:25:24.996 "uuid": "3ed13e76-da43-4d3e-a51e-d12397451793",
00:25:24.996 "strip_size_kb": 64,
00:25:24.996 "state": "online",
00:25:24.996 "raid_level": "concat",
00:25:24.996 "superblock": true,
00:25:24.996 "num_base_bdevs": 2,
00:25:24.996 "num_base_bdevs_discovered": 2,
00:25:24.996 "num_base_bdevs_operational": 2,
00:25:24.996 "base_bdevs_list": [
00:25:24.996 {
00:25:24.996 "name": "BaseBdev1",
00:25:24.996 "uuid": "684d94a4-120a-420c-a69a-f1a1eabeb084",
00:25:24.996 "is_configured": true,
00:25:24.996 "data_offset": 2048,
00:25:24.996 "data_size": 63488
00:25:24.996 },
00:25:24.996 {
00:25:24.996 "name": "BaseBdev2",
00:25:24.996 "uuid": "9d2ce476-ced6-42ce-95d4-2477348a15b3",
00:25:24.996 "is_configured": true,
00:25:24.996 "data_offset": 2048,
00:25:24.996 "data_size": 63488
00:25:24.996 }
00:25:24.996 ]
00:25:24.996 }
00:25:24.996 }
00:25:24.996 }'
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:25:24.996 BaseBdev2'
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:24.996 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.253 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:25.254 [2024-11-05 15:54:57.428517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:25:25.254 [2024-11-05 15:54:57.428543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:25.254 [2024-11-05 15:54:57.428582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:25.254 "name": "Existed_Raid",
00:25:25.254 "uuid": "3ed13e76-da43-4d3e-a51e-d12397451793",
00:25:25.254 "strip_size_kb": 64,
00:25:25.254 "state": "offline",
00:25:25.254 "raid_level": "concat",
00:25:25.254 "superblock": true,
00:25:25.254 "num_base_bdevs": 2,
00:25:25.254 "num_base_bdevs_discovered": 1,
00:25:25.254 "num_base_bdevs_operational": 1,
00:25:25.254 "base_bdevs_list": [
00:25:25.254 {
00:25:25.254 "name": null,
00:25:25.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:25.254 "is_configured": false,
00:25:25.254 "data_offset": 0,
00:25:25.254 "data_size": 63488
00:25:25.254 },
00:25:25.254 {
00:25:25.254 "name": "BaseBdev2",
00:25:25.254 "uuid": "9d2ce476-ced6-42ce-95d4-2477348a15b3",
00:25:25.254 "is_configured": true,
00:25:25.254 "data_offset": 2048,
00:25:25.254 "data_size": 63488
00:25:25.254 }
00:25:25.254 ]
00:25:25.254 }'
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:25.254 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:25.512 [2024-11-05 15:54:57.799463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:25:25.512 [2024-11-05 15:54:57.799594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60493
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60493 ']'
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60493
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60493
00:25:25.512 killing process with pid 60493
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '['
reactor_0 = sudo ']' 00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60493' 00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60493 00:25:25.512 [2024-11-05 15:54:57.909272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.512 15:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60493 00:25:25.512 [2024-11-05 15:54:57.917695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:26.078 ************************************ 00:25:26.078 END TEST raid_state_function_test_sb 00:25:26.078 ************************************ 00:25:26.078 15:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:26.078 00:25:26.078 real 0m3.459s 00:25:26.078 user 0m5.032s 00:25:26.078 sys 0m0.555s 00:25:26.078 15:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:26.078 15:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.335 15:54:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:25:26.335 15:54:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:26.335 15:54:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:26.335 15:54:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:26.335 ************************************ 00:25:26.335 START TEST raid_superblock_test 00:25:26.335 ************************************ 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60723 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60723 00:25:26.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60723 ']' 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:26.335 15:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.335 [2024-11-05 15:54:58.596453] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:26.335 [2024-11-05 15:54:58.596711] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60723 ] 00:25:26.593 [2024-11-05 15:54:58.752343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.593 [2024-11-05 15:54:58.837804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.593 [2024-11-05 15:54:58.948737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.593 [2024-11-05 15:54:58.948761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:27.158 
15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.158 malloc1 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.158 [2024-11-05 15:54:59.439284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:27.158 [2024-11-05 15:54:59.439443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.158 [2024-11-05 15:54:59.439479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:27.158 [2024-11-05 15:54:59.439531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.158 [2024-11-05 15:54:59.441311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.158 [2024-11-05 15:54:59.441414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:27.158 pt1 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.158 malloc2 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.158 [2024-11-05 15:54:59.470800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:27.158 [2024-11-05 15:54:59.470934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.158 [2024-11-05 15:54:59.470954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:27.158 [2024-11-05 15:54:59.470961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.158 [2024-11-05 15:54:59.472637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.158 [2024-11-05 15:54:59.472666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:27.158 
pt2 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.158 [2024-11-05 15:54:59.478859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:27.158 [2024-11-05 15:54:59.480439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:27.158 [2024-11-05 15:54:59.480630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:27.158 [2024-11-05 15:54:59.480686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:27.158 [2024-11-05 15:54:59.480913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:27.158 [2024-11-05 15:54:59.481071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:27.158 [2024-11-05 15:54:59.481125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:27.158 [2024-11-05 15:54:59.481282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.158 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.158 "name": "raid_bdev1", 00:25:27.158 "uuid": "5ecfc4d1-6b59-483c-86da-8dde03c19ea4", 00:25:27.158 "strip_size_kb": 64, 00:25:27.158 "state": "online", 00:25:27.158 "raid_level": "concat", 00:25:27.158 "superblock": true, 00:25:27.158 "num_base_bdevs": 2, 00:25:27.158 "num_base_bdevs_discovered": 2, 00:25:27.158 "num_base_bdevs_operational": 2, 00:25:27.158 "base_bdevs_list": [ 00:25:27.158 { 00:25:27.158 "name": "pt1", 
00:25:27.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.158 "is_configured": true, 00:25:27.158 "data_offset": 2048, 00:25:27.158 "data_size": 63488 00:25:27.159 }, 00:25:27.159 { 00:25:27.159 "name": "pt2", 00:25:27.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.159 "is_configured": true, 00:25:27.159 "data_offset": 2048, 00:25:27.159 "data_size": 63488 00:25:27.159 } 00:25:27.159 ] 00:25:27.159 }' 00:25:27.159 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.159 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.416 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:27.417 [2024-11-05 15:54:59.803139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:27.417 "name": "raid_bdev1", 00:25:27.417 "aliases": [ 00:25:27.417 "5ecfc4d1-6b59-483c-86da-8dde03c19ea4" 00:25:27.417 ], 00:25:27.417 "product_name": "Raid Volume", 00:25:27.417 "block_size": 512, 00:25:27.417 "num_blocks": 126976, 00:25:27.417 "uuid": "5ecfc4d1-6b59-483c-86da-8dde03c19ea4", 00:25:27.417 "assigned_rate_limits": { 00:25:27.417 "rw_ios_per_sec": 0, 00:25:27.417 "rw_mbytes_per_sec": 0, 00:25:27.417 "r_mbytes_per_sec": 0, 00:25:27.417 "w_mbytes_per_sec": 0 00:25:27.417 }, 00:25:27.417 "claimed": false, 00:25:27.417 "zoned": false, 00:25:27.417 "supported_io_types": { 00:25:27.417 "read": true, 00:25:27.417 "write": true, 00:25:27.417 "unmap": true, 00:25:27.417 "flush": true, 00:25:27.417 "reset": true, 00:25:27.417 "nvme_admin": false, 00:25:27.417 "nvme_io": false, 00:25:27.417 "nvme_io_md": false, 00:25:27.417 "write_zeroes": true, 00:25:27.417 "zcopy": false, 00:25:27.417 "get_zone_info": false, 00:25:27.417 "zone_management": false, 00:25:27.417 "zone_append": false, 00:25:27.417 "compare": false, 00:25:27.417 "compare_and_write": false, 00:25:27.417 "abort": false, 00:25:27.417 "seek_hole": false, 00:25:27.417 "seek_data": false, 00:25:27.417 "copy": false, 00:25:27.417 "nvme_iov_md": false 00:25:27.417 }, 00:25:27.417 "memory_domains": [ 00:25:27.417 { 00:25:27.417 "dma_device_id": "system", 00:25:27.417 "dma_device_type": 1 00:25:27.417 }, 00:25:27.417 { 00:25:27.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.417 "dma_device_type": 2 00:25:27.417 }, 00:25:27.417 { 00:25:27.417 "dma_device_id": "system", 00:25:27.417 "dma_device_type": 1 00:25:27.417 }, 00:25:27.417 { 00:25:27.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.417 "dma_device_type": 2 00:25:27.417 } 00:25:27.417 ], 00:25:27.417 "driver_specific": { 00:25:27.417 "raid": { 00:25:27.417 "uuid": "5ecfc4d1-6b59-483c-86da-8dde03c19ea4", 00:25:27.417 "strip_size_kb": 64, 00:25:27.417 "state": "online", 00:25:27.417 
"raid_level": "concat", 00:25:27.417 "superblock": true, 00:25:27.417 "num_base_bdevs": 2, 00:25:27.417 "num_base_bdevs_discovered": 2, 00:25:27.417 "num_base_bdevs_operational": 2, 00:25:27.417 "base_bdevs_list": [ 00:25:27.417 { 00:25:27.417 "name": "pt1", 00:25:27.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.417 "is_configured": true, 00:25:27.417 "data_offset": 2048, 00:25:27.417 "data_size": 63488 00:25:27.417 }, 00:25:27.417 { 00:25:27.417 "name": "pt2", 00:25:27.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.417 "is_configured": true, 00:25:27.417 "data_offset": 2048, 00:25:27.417 "data_size": 63488 00:25:27.417 } 00:25:27.417 ] 00:25:27.417 } 00:25:27.417 } 00:25:27.417 }' 00:25:27.417 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:27.679 pt2' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:54:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:27.679 [2024-11-05 15:54:59.971146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5ecfc4d1-6b59-483c-86da-8dde03c19ea4 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
5ecfc4d1-6b59-483c-86da-8dde03c19ea4 ']' 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 [2024-11-05 15:55:00.002946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:27.679 [2024-11-05 15:55:00.002971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:27.679 [2024-11-05 15:55:00.003053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:27.679 [2024-11-05 15:55:00.003108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:27.679 [2024-11-05 15:55:00.003121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:27.679 15:55:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.679 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.937 [2024-11-05 15:55:00.095022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:27.937 [2024-11-05 15:55:00.097296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:27.937 [2024-11-05 15:55:00.097369] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:27.937 [2024-11-05 15:55:00.097428] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:27.937 [2024-11-05 15:55:00.097445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:27.937 [2024-11-05 15:55:00.097459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:27.937 request: 00:25:27.937 { 00:25:27.937 "name": "raid_bdev1", 00:25:27.937 "raid_level": "concat", 00:25:27.937 "base_bdevs": [ 00:25:27.937 "malloc1", 00:25:27.937 "malloc2" 00:25:27.937 ], 00:25:27.937 "strip_size_kb": 64, 
00:25:27.937 "superblock": false, 00:25:27.937 "method": "bdev_raid_create", 00:25:27.937 "req_id": 1 00:25:27.937 } 00:25:27.937 Got JSON-RPC error response 00:25:27.937 response: 00:25:27.937 { 00:25:27.937 "code": -17, 00:25:27.937 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:27.937 } 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.937 [2024-11-05 15:55:00.134956] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:25:27.937 [2024-11-05 15:55:00.135002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.937 [2024-11-05 15:55:00.135016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:27.937 [2024-11-05 15:55:00.135025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.937 [2024-11-05 15:55:00.136806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.937 [2024-11-05 15:55:00.136838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:27.937 [2024-11-05 15:55:00.136907] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:27.937 [2024-11-05 15:55:00.136953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:27.937 pt1 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.937 "name": "raid_bdev1", 00:25:27.937 "uuid": "5ecfc4d1-6b59-483c-86da-8dde03c19ea4", 00:25:27.937 "strip_size_kb": 64, 00:25:27.937 "state": "configuring", 00:25:27.937 "raid_level": "concat", 00:25:27.937 "superblock": true, 00:25:27.937 "num_base_bdevs": 2, 00:25:27.937 "num_base_bdevs_discovered": 1, 00:25:27.937 "num_base_bdevs_operational": 2, 00:25:27.937 "base_bdevs_list": [ 00:25:27.937 { 00:25:27.937 "name": "pt1", 00:25:27.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.937 "is_configured": true, 00:25:27.937 "data_offset": 2048, 00:25:27.937 "data_size": 63488 00:25:27.937 }, 00:25:27.937 { 00:25:27.937 "name": null, 00:25:27.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.937 "is_configured": false, 00:25:27.937 "data_offset": 2048, 00:25:27.937 "data_size": 63488 00:25:27.937 } 00:25:27.937 ] 00:25:27.937 }' 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.937 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.197 [2024-11-05 15:55:00.447041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:28.197 [2024-11-05 15:55:00.447097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.197 [2024-11-05 15:55:00.447112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:28.197 [2024-11-05 15:55:00.447121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.197 [2024-11-05 15:55:00.447471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.197 [2024-11-05 15:55:00.447484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:28.197 [2024-11-05 15:55:00.447538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:28.197 [2024-11-05 15:55:00.447556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:28.197 [2024-11-05 15:55:00.447639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:28.197 [2024-11-05 15:55:00.447648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:28.197 [2024-11-05 15:55:00.447831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:28.197 [2024-11-05 15:55:00.447942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:25:28.197 [2024-11-05 15:55:00.447950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:28.197 [2024-11-05 15:55:00.448050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:28.197 pt2 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.197 "name": "raid_bdev1", 00:25:28.197 "uuid": "5ecfc4d1-6b59-483c-86da-8dde03c19ea4", 00:25:28.197 "strip_size_kb": 64, 00:25:28.197 "state": "online", 00:25:28.197 "raid_level": "concat", 00:25:28.197 "superblock": true, 00:25:28.197 "num_base_bdevs": 2, 00:25:28.197 "num_base_bdevs_discovered": 2, 00:25:28.197 "num_base_bdevs_operational": 2, 00:25:28.197 "base_bdevs_list": [ 00:25:28.197 { 00:25:28.197 "name": "pt1", 00:25:28.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:28.197 "is_configured": true, 00:25:28.197 "data_offset": 2048, 00:25:28.197 "data_size": 63488 00:25:28.197 }, 00:25:28.197 { 00:25:28.197 "name": "pt2", 00:25:28.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:28.197 "is_configured": true, 00:25:28.197 "data_offset": 2048, 00:25:28.197 "data_size": 63488 00:25:28.197 } 00:25:28.197 ] 00:25:28.197 }' 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.197 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:28.455 15:55:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.455 [2024-11-05 15:55:00.767321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.455 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:28.455 "name": "raid_bdev1", 00:25:28.455 "aliases": [ 00:25:28.455 "5ecfc4d1-6b59-483c-86da-8dde03c19ea4" 00:25:28.455 ], 00:25:28.455 "product_name": "Raid Volume", 00:25:28.455 "block_size": 512, 00:25:28.455 "num_blocks": 126976, 00:25:28.455 "uuid": "5ecfc4d1-6b59-483c-86da-8dde03c19ea4", 00:25:28.455 "assigned_rate_limits": { 00:25:28.455 "rw_ios_per_sec": 0, 00:25:28.455 "rw_mbytes_per_sec": 0, 00:25:28.455 "r_mbytes_per_sec": 0, 00:25:28.455 "w_mbytes_per_sec": 0 00:25:28.455 }, 00:25:28.455 "claimed": false, 00:25:28.455 "zoned": false, 00:25:28.455 "supported_io_types": { 00:25:28.455 "read": true, 00:25:28.455 "write": true, 00:25:28.455 "unmap": true, 00:25:28.455 "flush": true, 00:25:28.455 "reset": true, 00:25:28.455 "nvme_admin": false, 00:25:28.455 "nvme_io": false, 00:25:28.455 "nvme_io_md": false, 00:25:28.455 "write_zeroes": true, 00:25:28.455 "zcopy": false, 00:25:28.455 "get_zone_info": false, 00:25:28.455 "zone_management": false, 00:25:28.455 "zone_append": false, 00:25:28.455 "compare": false, 00:25:28.455 "compare_and_write": false, 00:25:28.455 "abort": false, 00:25:28.455 "seek_hole": false, 00:25:28.455 
"seek_data": false, 00:25:28.456 "copy": false, 00:25:28.456 "nvme_iov_md": false 00:25:28.456 }, 00:25:28.456 "memory_domains": [ 00:25:28.456 { 00:25:28.456 "dma_device_id": "system", 00:25:28.456 "dma_device_type": 1 00:25:28.456 }, 00:25:28.456 { 00:25:28.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.456 "dma_device_type": 2 00:25:28.456 }, 00:25:28.456 { 00:25:28.456 "dma_device_id": "system", 00:25:28.456 "dma_device_type": 1 00:25:28.456 }, 00:25:28.456 { 00:25:28.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.456 "dma_device_type": 2 00:25:28.456 } 00:25:28.456 ], 00:25:28.456 "driver_specific": { 00:25:28.456 "raid": { 00:25:28.456 "uuid": "5ecfc4d1-6b59-483c-86da-8dde03c19ea4", 00:25:28.456 "strip_size_kb": 64, 00:25:28.456 "state": "online", 00:25:28.456 "raid_level": "concat", 00:25:28.456 "superblock": true, 00:25:28.456 "num_base_bdevs": 2, 00:25:28.456 "num_base_bdevs_discovered": 2, 00:25:28.456 "num_base_bdevs_operational": 2, 00:25:28.456 "base_bdevs_list": [ 00:25:28.456 { 00:25:28.456 "name": "pt1", 00:25:28.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:28.456 "is_configured": true, 00:25:28.456 "data_offset": 2048, 00:25:28.456 "data_size": 63488 00:25:28.456 }, 00:25:28.456 { 00:25:28.456 "name": "pt2", 00:25:28.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:28.456 "is_configured": true, 00:25:28.456 "data_offset": 2048, 00:25:28.456 "data_size": 63488 00:25:28.456 } 00:25:28.456 ] 00:25:28.456 } 00:25:28.456 } 00:25:28.456 }' 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:28.456 pt2' 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:28.456 15:55:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.456 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.714 [2024-11-05 15:55:00.923336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5ecfc4d1-6b59-483c-86da-8dde03c19ea4 '!=' 5ecfc4d1-6b59-483c-86da-8dde03c19ea4 ']' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60723 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60723 ']' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60723 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60723 00:25:28.714 killing process with pid 60723 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60723' 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60723 00:25:28.714 [2024-11-05 15:55:00.975745] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:28.714 [2024-11-05 15:55:00.975816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.714 15:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60723 00:25:28.714 [2024-11-05 15:55:00.975864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.714 [2024-11-05 15:55:00.975874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:28.714 [2024-11-05 15:55:01.077795] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:29.280 15:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:29.280 00:25:29.280 real 0m3.108s 00:25:29.280 user 0m4.446s 00:25:29.280 sys 0m0.484s 00:25:29.280 15:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:29.280 15:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.280 ************************************ 00:25:29.280 END TEST raid_superblock_test 00:25:29.280 ************************************ 00:25:29.280 15:55:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:25:29.280 15:55:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:29.280 15:55:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:29.280 15:55:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:29.280 ************************************ 00:25:29.280 START TEST raid_read_error_test 00:25:29.280 ************************************ 00:25:29.280 15:55:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:29.280 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:29.281 15:55:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:29.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.h0uD9qrxcx 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60920 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60920 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 60920 ']' 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.281 15:55:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:29.538 [2024-11-05 15:55:01.754209] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:29.538 [2024-11-05 15:55:01.754329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60920 ] 00:25:29.538 [2024-11-05 15:55:01.915813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.796 [2024-11-05 15:55:02.013216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.796 [2024-11-05 15:55:02.149782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:29.797 [2024-11-05 15:55:02.149817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 BaseBdev1_malloc 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:30.361 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.362 true 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.362 [2024-11-05 15:55:02.630184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:30.362 [2024-11-05 15:55:02.630237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.362 [2024-11-05 15:55:02.630256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:30.362 [2024-11-05 15:55:02.630267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.362 [2024-11-05 15:55:02.632366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.362 [2024-11-05 15:55:02.632567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:30.362 BaseBdev1 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.362 BaseBdev2_malloc 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.362 true 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.362 [2024-11-05 15:55:02.674252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:30.362 [2024-11-05 15:55:02.674302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.362 [2024-11-05 15:55:02.674317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:30.362 [2024-11-05 15:55:02.674327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.362 [2024-11-05 15:55:02.676444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.362 [2024-11-05 15:55:02.676479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:30.362 BaseBdev2 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.362 [2024-11-05 15:55:02.682309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:25:30.362 [2024-11-05 15:55:02.684159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:30.362 [2024-11-05 15:55:02.684338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:30.362 [2024-11-05 15:55:02.684352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:30.362 [2024-11-05 15:55:02.684582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:30.362 [2024-11-05 15:55:02.684726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:30.362 [2024-11-05 15:55:02.684736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:30.362 [2024-11-05 15:55:02.684892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.362 "name": "raid_bdev1", 00:25:30.362 "uuid": "eb1b119d-d14a-47c4-a1c8-7f9f2166c092", 00:25:30.362 "strip_size_kb": 64, 00:25:30.362 "state": "online", 00:25:30.362 "raid_level": "concat", 00:25:30.362 "superblock": true, 00:25:30.362 "num_base_bdevs": 2, 00:25:30.362 "num_base_bdevs_discovered": 2, 00:25:30.362 "num_base_bdevs_operational": 2, 00:25:30.362 "base_bdevs_list": [ 00:25:30.362 { 00:25:30.362 "name": "BaseBdev1", 00:25:30.362 "uuid": "b0019195-4c1c-50c4-b312-9cb14cc9b0c3", 00:25:30.362 "is_configured": true, 00:25:30.362 "data_offset": 2048, 00:25:30.362 "data_size": 63488 00:25:30.362 }, 00:25:30.362 { 00:25:30.362 "name": "BaseBdev2", 00:25:30.362 "uuid": "573c7943-920d-50d3-9f8d-7ef3f25487e5", 00:25:30.362 "is_configured": true, 00:25:30.362 "data_offset": 2048, 00:25:30.362 "data_size": 63488 00:25:30.362 } 00:25:30.362 ] 00:25:30.362 }' 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.362 15:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.620 15:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:30.620 15:55:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:30.878 [2024-11-05 15:55:03.079312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:31.814 15:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:31.814 15:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.814 15:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.814 "name": "raid_bdev1", 00:25:31.814 "uuid": "eb1b119d-d14a-47c4-a1c8-7f9f2166c092", 00:25:31.814 "strip_size_kb": 64, 00:25:31.814 "state": "online", 00:25:31.814 "raid_level": "concat", 00:25:31.814 "superblock": true, 00:25:31.814 "num_base_bdevs": 2, 00:25:31.814 "num_base_bdevs_discovered": 2, 00:25:31.814 "num_base_bdevs_operational": 2, 00:25:31.814 "base_bdevs_list": [ 00:25:31.814 { 00:25:31.814 "name": "BaseBdev1", 00:25:31.814 "uuid": "b0019195-4c1c-50c4-b312-9cb14cc9b0c3", 00:25:31.814 "is_configured": true, 00:25:31.814 "data_offset": 2048, 00:25:31.814 "data_size": 63488 00:25:31.814 }, 00:25:31.814 { 00:25:31.814 "name": "BaseBdev2", 00:25:31.814 "uuid": "573c7943-920d-50d3-9f8d-7ef3f25487e5", 00:25:31.814 "is_configured": true, 00:25:31.814 "data_offset": 2048, 00:25:31.814 "data_size": 63488 00:25:31.814 } 00:25:31.814 ] 00:25:31.814 }' 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.814 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.071 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:32.072 15:55:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.072 [2024-11-05 15:55:04.325212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:32.072 [2024-11-05 15:55:04.325245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:32.072 [2024-11-05 15:55:04.328428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:32.072 [2024-11-05 15:55:04.328548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.072 [2024-11-05 15:55:04.328602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:32.072 [2024-11-05 15:55:04.328833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.072 { 00:25:32.072 "results": [ 00:25:32.072 { 00:25:32.072 "job": "raid_bdev1", 00:25:32.072 "core_mask": "0x1", 00:25:32.072 "workload": "randrw", 00:25:32.072 "percentage": 50, 00:25:32.072 "status": "finished", 00:25:32.072 "queue_depth": 1, 00:25:32.072 "io_size": 131072, 00:25:32.072 "runtime": 1.243978, 00:25:32.072 "iops": 15267.954899523947, 00:25:32.072 "mibps": 1908.4943624404934, 00:25:32.072 "io_failed": 1, 00:25:32.072 "io_timeout": 0, 00:25:32.072 "avg_latency_us": 89.55151051749135, 00:25:32.072 "min_latency_us": 33.08307692307692, 00:25:32.072 "max_latency_us": 1751.8276923076924 00:25:32.072 } 00:25:32.072 ], 00:25:32.072 "core_count": 1 00:25:32.072 } 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60920 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 60920 ']' 00:25:32.072 15:55:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 60920 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60920 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:32.072 killing process with pid 60920 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60920' 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 60920 00:25:32.072 [2024-11-05 15:55:04.361176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:32.072 15:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 60920 00:25:32.072 [2024-11-05 15:55:04.442909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.h0uD9qrxcx 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:32.637 15:55:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:25:32.638 00:25:32.638 real 0m3.355s 00:25:32.638 user 0m4.050s 00:25:32.638 sys 0m0.372s 00:25:32.638 15:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:32.638 ************************************ 00:25:32.638 END TEST raid_read_error_test 00:25:32.638 ************************************ 00:25:32.638 15:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.895 15:55:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:25:32.895 15:55:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:32.895 15:55:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:32.895 15:55:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:32.895 ************************************ 00:25:32.895 START TEST raid_write_error_test 00:25:32.895 ************************************ 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:32.895 15:55:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:32.895 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Fue31USDMy 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61057 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61057 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61057 ']' 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:32.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.896 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:32.896 [2024-11-05 15:55:05.139084] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:32.896 [2024-11-05 15:55:05.139277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61057 ] 00:25:32.896 [2024-11-05 15:55:05.288740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.153 [2024-11-05 15:55:05.369895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.153 [2024-11-05 15:55:05.478649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.153 [2024-11-05 15:55:05.478681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 BaseBdev1_malloc 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 true 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 [2024-11-05 15:55:05.943738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:33.719 [2024-11-05 15:55:05.943906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.719 [2024-11-05 15:55:05.943927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:33.719 [2024-11-05 15:55:05.943937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.719 [2024-11-05 15:55:05.945647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.719 [2024-11-05 15:55:05.945679] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:33.719 BaseBdev1 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 BaseBdev2_malloc 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 true 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 [2024-11-05 15:55:05.983073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:33.719 [2024-11-05 15:55:05.983209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.719 [2024-11-05 15:55:05.983227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:33.719 
[2024-11-05 15:55:05.983236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.719 [2024-11-05 15:55:05.984951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.719 [2024-11-05 15:55:05.984981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:33.719 BaseBdev2 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 [2024-11-05 15:55:05.991121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:33.719 [2024-11-05 15:55:05.992619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:33.719 [2024-11-05 15:55:05.992770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:33.719 [2024-11-05 15:55:05.992781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:33.719 [2024-11-05 15:55:05.992980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:33.719 [2024-11-05 15:55:05.993098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:33.719 [2024-11-05 15:55:05.993107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:33.719 [2024-11-05 15:55:05.993215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.719 
15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:33.719 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.720 15:55:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.720 15:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.720 15:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.720 "name": "raid_bdev1", 00:25:33.720 "uuid": "cdf90034-11d6-4634-8501-95979e51f222", 00:25:33.720 "strip_size_kb": 64, 00:25:33.720 "state": "online", 00:25:33.720 "raid_level": "concat", 00:25:33.720 "superblock": true, 
00:25:33.720 "num_base_bdevs": 2, 00:25:33.720 "num_base_bdevs_discovered": 2, 00:25:33.720 "num_base_bdevs_operational": 2, 00:25:33.720 "base_bdevs_list": [ 00:25:33.720 { 00:25:33.720 "name": "BaseBdev1", 00:25:33.720 "uuid": "8ae0bb56-41b5-5324-aca3-e05ebc44bccf", 00:25:33.720 "is_configured": true, 00:25:33.720 "data_offset": 2048, 00:25:33.720 "data_size": 63488 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "name": "BaseBdev2", 00:25:33.720 "uuid": "fa89094b-0631-501c-9643-16b9e648452a", 00:25:33.720 "is_configured": true, 00:25:33.720 "data_offset": 2048, 00:25:33.720 "data_size": 63488 00:25:33.720 } 00:25:33.720 ] 00:25:33.720 }' 00:25:33.720 15:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.720 15:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.978 15:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:33.978 15:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:33.978 [2024-11-05 15:55:06.383953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:25:34.913 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.914 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.173 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.173 "name": "raid_bdev1", 00:25:35.173 "uuid": "cdf90034-11d6-4634-8501-95979e51f222", 00:25:35.173 "strip_size_kb": 64, 00:25:35.173 "state": "online", 00:25:35.173 "raid_level": "concat", 
00:25:35.173 "superblock": true, 00:25:35.173 "num_base_bdevs": 2, 00:25:35.173 "num_base_bdevs_discovered": 2, 00:25:35.173 "num_base_bdevs_operational": 2, 00:25:35.173 "base_bdevs_list": [ 00:25:35.173 { 00:25:35.173 "name": "BaseBdev1", 00:25:35.173 "uuid": "8ae0bb56-41b5-5324-aca3-e05ebc44bccf", 00:25:35.173 "is_configured": true, 00:25:35.173 "data_offset": 2048, 00:25:35.173 "data_size": 63488 00:25:35.173 }, 00:25:35.173 { 00:25:35.173 "name": "BaseBdev2", 00:25:35.173 "uuid": "fa89094b-0631-501c-9643-16b9e648452a", 00:25:35.173 "is_configured": true, 00:25:35.173 "data_offset": 2048, 00:25:35.173 "data_size": 63488 00:25:35.173 } 00:25:35.173 ] 00:25:35.173 }' 00:25:35.173 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.173 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.173 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:35.173 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.173 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.432 [2024-11-05 15:55:07.590830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.432 [2024-11-05 15:55:07.590870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.432 [2024-11-05 15:55:07.593364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.432 [2024-11-05 15:55:07.593402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.432 [2024-11-05 15:55:07.593429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.432 [2024-11-05 15:55:07.593440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:35.432 { 
00:25:35.432 "results": [ 00:25:35.432 { 00:25:35.432 "job": "raid_bdev1", 00:25:35.432 "core_mask": "0x1", 00:25:35.432 "workload": "randrw", 00:25:35.432 "percentage": 50, 00:25:35.432 "status": "finished", 00:25:35.432 "queue_depth": 1, 00:25:35.432 "io_size": 131072, 00:25:35.432 "runtime": 1.205387, 00:25:35.432 "iops": 18799.771359737577, 00:25:35.432 "mibps": 2349.971419967197, 00:25:35.432 "io_failed": 1, 00:25:35.432 "io_timeout": 0, 00:25:35.432 "avg_latency_us": 72.8221616667685, 00:25:35.432 "min_latency_us": 25.403076923076924, 00:25:35.432 "max_latency_us": 1367.4338461538462 00:25:35.432 } 00:25:35.432 ], 00:25:35.432 "core_count": 1 00:25:35.432 } 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61057 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 61057 ']' 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61057 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61057 00:25:35.432 killing process with pid 61057 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61057' 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61057 00:25:35.432 [2024-11-05 15:55:07.621727] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:35.432 15:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61057 00:25:35.432 [2024-11-05 15:55:07.688178] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Fue31USDMy 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.83 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.83 != \0\.\0\0 ]] 00:25:36.000 00:25:36.000 real 0m3.213s 00:25:36.000 user 0m3.812s 00:25:36.000 sys 0m0.350s 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:36.000 15:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.000 ************************************ 00:25:36.000 END TEST raid_write_error_test 00:25:36.000 ************************************ 00:25:36.000 15:55:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:36.000 15:55:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:25:36.000 15:55:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:36.000 15:55:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:36.000 15:55:08 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.000 ************************************ 00:25:36.000 START TEST raid_state_function_test 00:25:36.000 ************************************ 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:36.000 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:36.001 Process raid pid: 61189 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:36.001 
15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61189 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61189' 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61189 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61189 ']' 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:36.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.001 15:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:36.001 [2024-11-05 15:55:08.384668] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:36.001 [2024-11-05 15:55:08.384765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.260 [2024-11-05 15:55:08.540535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.260 [2024-11-05 15:55:08.641459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.519 [2024-11-05 15:55:08.778664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:36.519 [2024-11-05 15:55:08.778698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.085 [2024-11-05 15:55:09.316679] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:37.085 [2024-11-05 15:55:09.316729] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:37.085 [2024-11-05 15:55:09.316740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:37.085 [2024-11-05 15:55:09.316750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.085 15:55:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.085 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.085 "name": "Existed_Raid", 00:25:37.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.085 "strip_size_kb": 0, 00:25:37.085 "state": "configuring", 00:25:37.085 "raid_level": "raid1", 00:25:37.085 "superblock": false, 00:25:37.085 "num_base_bdevs": 2, 00:25:37.085 "num_base_bdevs_discovered": 0, 00:25:37.085 "num_base_bdevs_operational": 2, 00:25:37.086 "base_bdevs_list": [ 00:25:37.086 { 00:25:37.086 "name": "BaseBdev1", 00:25:37.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.086 "is_configured": false, 00:25:37.086 "data_offset": 0, 00:25:37.086 "data_size": 0 00:25:37.086 }, 00:25:37.086 { 00:25:37.086 "name": "BaseBdev2", 00:25:37.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.086 "is_configured": false, 00:25:37.086 "data_offset": 0, 00:25:37.086 "data_size": 0 00:25:37.086 } 00:25:37.086 ] 00:25:37.086 }' 00:25:37.086 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.086 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.371 [2024-11-05 15:55:09.608718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:37.371 [2024-11-05 15:55:09.608752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:37.371 15:55:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.371 [2024-11-05 15:55:09.616705] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:37.371 [2024-11-05 15:55:09.616744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:37.371 [2024-11-05 15:55:09.616752] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:37.371 [2024-11-05 15:55:09.616764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.371 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.372 [2024-11-05 15:55:09.649006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:37.372 BaseBdev1 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.372 [ 00:25:37.372 { 00:25:37.372 "name": "BaseBdev1", 00:25:37.372 "aliases": [ 00:25:37.372 "a3ce0eda-80d0-49df-8fa4-12fc15b8da77" 00:25:37.372 ], 00:25:37.372 "product_name": "Malloc disk", 00:25:37.372 "block_size": 512, 00:25:37.372 "num_blocks": 65536, 00:25:37.372 "uuid": "a3ce0eda-80d0-49df-8fa4-12fc15b8da77", 00:25:37.372 "assigned_rate_limits": { 00:25:37.372 "rw_ios_per_sec": 0, 00:25:37.372 "rw_mbytes_per_sec": 0, 00:25:37.372 "r_mbytes_per_sec": 0, 00:25:37.372 "w_mbytes_per_sec": 0 00:25:37.372 }, 00:25:37.372 "claimed": true, 00:25:37.372 "claim_type": "exclusive_write", 00:25:37.372 "zoned": false, 00:25:37.372 "supported_io_types": { 00:25:37.372 "read": true, 00:25:37.372 "write": true, 00:25:37.372 "unmap": true, 00:25:37.372 "flush": true, 00:25:37.372 "reset": true, 00:25:37.372 
"nvme_admin": false, 00:25:37.372 "nvme_io": false, 00:25:37.372 "nvme_io_md": false, 00:25:37.372 "write_zeroes": true, 00:25:37.372 "zcopy": true, 00:25:37.372 "get_zone_info": false, 00:25:37.372 "zone_management": false, 00:25:37.372 "zone_append": false, 00:25:37.372 "compare": false, 00:25:37.372 "compare_and_write": false, 00:25:37.372 "abort": true, 00:25:37.372 "seek_hole": false, 00:25:37.372 "seek_data": false, 00:25:37.372 "copy": true, 00:25:37.372 "nvme_iov_md": false 00:25:37.372 }, 00:25:37.372 "memory_domains": [ 00:25:37.372 { 00:25:37.372 "dma_device_id": "system", 00:25:37.372 "dma_device_type": 1 00:25:37.372 }, 00:25:37.372 { 00:25:37.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.372 "dma_device_type": 2 00:25:37.372 } 00:25:37.372 ], 00:25:37.372 "driver_specific": {} 00:25:37.372 } 00:25:37.372 ] 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.372 "name": "Existed_Raid", 00:25:37.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.372 "strip_size_kb": 0, 00:25:37.372 "state": "configuring", 00:25:37.372 "raid_level": "raid1", 00:25:37.372 "superblock": false, 00:25:37.372 "num_base_bdevs": 2, 00:25:37.372 "num_base_bdevs_discovered": 1, 00:25:37.372 "num_base_bdevs_operational": 2, 00:25:37.372 "base_bdevs_list": [ 00:25:37.372 { 00:25:37.372 "name": "BaseBdev1", 00:25:37.372 "uuid": "a3ce0eda-80d0-49df-8fa4-12fc15b8da77", 00:25:37.372 "is_configured": true, 00:25:37.372 "data_offset": 0, 00:25:37.372 "data_size": 65536 00:25:37.372 }, 00:25:37.372 { 00:25:37.372 "name": "BaseBdev2", 00:25:37.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.372 "is_configured": false, 00:25:37.372 "data_offset": 0, 00:25:37.372 "data_size": 0 00:25:37.372 } 00:25:37.372 ] 00:25:37.372 }' 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.372 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 15:55:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:37.631 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.631 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 [2024-11-05 15:55:09.989122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:37.631 [2024-11-05 15:55:09.989326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:37.631 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.631 15:55:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:37.631 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.631 15:55:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 [2024-11-05 15:55:09.997170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:37.631 [2024-11-05 15:55:09.999022] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:37.631 [2024-11-05 15:55:09.999154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.631 "name": "Existed_Raid", 00:25:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.631 "strip_size_kb": 0, 00:25:37.631 "state": "configuring", 00:25:37.631 "raid_level": "raid1", 00:25:37.631 "superblock": false, 00:25:37.631 "num_base_bdevs": 2, 00:25:37.631 "num_base_bdevs_discovered": 1, 00:25:37.631 "num_base_bdevs_operational": 2, 
00:25:37.631 "base_bdevs_list": [ 00:25:37.631 { 00:25:37.631 "name": "BaseBdev1", 00:25:37.631 "uuid": "a3ce0eda-80d0-49df-8fa4-12fc15b8da77", 00:25:37.631 "is_configured": true, 00:25:37.631 "data_offset": 0, 00:25:37.631 "data_size": 65536 00:25:37.631 }, 00:25:37.631 { 00:25:37.631 "name": "BaseBdev2", 00:25:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.631 "is_configured": false, 00:25:37.631 "data_offset": 0, 00:25:37.631 "data_size": 0 00:25:37.631 } 00:25:37.631 ] 00:25:37.631 }' 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.631 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.889 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:37.889 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.889 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.147 [2024-11-05 15:55:10.311593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:38.147 [2024-11-05 15:55:10.311634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:38.147 [2024-11-05 15:55:10.311641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:38.147 [2024-11-05 15:55:10.311928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:38.147 [2024-11-05 15:55:10.312067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:38.147 [2024-11-05 15:55:10.312084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:38.147 [2024-11-05 15:55:10.312330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.147 BaseBdev2 00:25:38.147 
15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.147 [ 00:25:38.147 { 00:25:38.147 "name": "BaseBdev2", 00:25:38.147 "aliases": [ 00:25:38.147 "ff40eea1-72cc-49a6-be38-62645ac5552f" 00:25:38.147 ], 00:25:38.147 "product_name": "Malloc disk", 00:25:38.147 "block_size": 512, 00:25:38.147 "num_blocks": 65536, 00:25:38.147 "uuid": "ff40eea1-72cc-49a6-be38-62645ac5552f", 00:25:38.147 "assigned_rate_limits": { 00:25:38.147 "rw_ios_per_sec": 0, 00:25:38.147 "rw_mbytes_per_sec": 0, 
00:25:38.147 "r_mbytes_per_sec": 0, 00:25:38.147 "w_mbytes_per_sec": 0 00:25:38.147 }, 00:25:38.147 "claimed": true, 00:25:38.147 "claim_type": "exclusive_write", 00:25:38.147 "zoned": false, 00:25:38.147 "supported_io_types": { 00:25:38.147 "read": true, 00:25:38.147 "write": true, 00:25:38.147 "unmap": true, 00:25:38.147 "flush": true, 00:25:38.147 "reset": true, 00:25:38.147 "nvme_admin": false, 00:25:38.147 "nvme_io": false, 00:25:38.147 "nvme_io_md": false, 00:25:38.147 "write_zeroes": true, 00:25:38.147 "zcopy": true, 00:25:38.147 "get_zone_info": false, 00:25:38.147 "zone_management": false, 00:25:38.147 "zone_append": false, 00:25:38.147 "compare": false, 00:25:38.147 "compare_and_write": false, 00:25:38.147 "abort": true, 00:25:38.147 "seek_hole": false, 00:25:38.147 "seek_data": false, 00:25:38.147 "copy": true, 00:25:38.147 "nvme_iov_md": false 00:25:38.147 }, 00:25:38.147 "memory_domains": [ 00:25:38.147 { 00:25:38.147 "dma_device_id": "system", 00:25:38.147 "dma_device_type": 1 00:25:38.147 }, 00:25:38.147 { 00:25:38.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.147 "dma_device_type": 2 00:25:38.147 } 00:25:38.147 ], 00:25:38.147 "driver_specific": {} 00:25:38.147 } 00:25:38.147 ] 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.147 "name": "Existed_Raid", 00:25:38.147 "uuid": "a8d136ac-f313-413f-86a5-77865c20c351", 00:25:38.147 "strip_size_kb": 0, 00:25:38.147 "state": "online", 00:25:38.147 "raid_level": "raid1", 00:25:38.147 "superblock": false, 00:25:38.147 "num_base_bdevs": 2, 00:25:38.147 "num_base_bdevs_discovered": 2, 00:25:38.147 "num_base_bdevs_operational": 2, 00:25:38.147 "base_bdevs_list": [ 00:25:38.147 { 00:25:38.147 "name": "BaseBdev1", 00:25:38.147 "uuid": "a3ce0eda-80d0-49df-8fa4-12fc15b8da77", 00:25:38.147 "is_configured": 
true, 00:25:38.147 "data_offset": 0, 00:25:38.147 "data_size": 65536 00:25:38.147 }, 00:25:38.147 { 00:25:38.147 "name": "BaseBdev2", 00:25:38.147 "uuid": "ff40eea1-72cc-49a6-be38-62645ac5552f", 00:25:38.147 "is_configured": true, 00:25:38.147 "data_offset": 0, 00:25:38.147 "data_size": 65536 00:25:38.147 } 00:25:38.147 ] 00:25:38.147 }' 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.147 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 [2024-11-05 15:55:10.640029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:25:38.407 "name": "Existed_Raid", 00:25:38.407 "aliases": [ 00:25:38.407 "a8d136ac-f313-413f-86a5-77865c20c351" 00:25:38.407 ], 00:25:38.407 "product_name": "Raid Volume", 00:25:38.407 "block_size": 512, 00:25:38.407 "num_blocks": 65536, 00:25:38.407 "uuid": "a8d136ac-f313-413f-86a5-77865c20c351", 00:25:38.407 "assigned_rate_limits": { 00:25:38.407 "rw_ios_per_sec": 0, 00:25:38.407 "rw_mbytes_per_sec": 0, 00:25:38.407 "r_mbytes_per_sec": 0, 00:25:38.407 "w_mbytes_per_sec": 0 00:25:38.407 }, 00:25:38.407 "claimed": false, 00:25:38.407 "zoned": false, 00:25:38.407 "supported_io_types": { 00:25:38.407 "read": true, 00:25:38.407 "write": true, 00:25:38.407 "unmap": false, 00:25:38.407 "flush": false, 00:25:38.407 "reset": true, 00:25:38.407 "nvme_admin": false, 00:25:38.407 "nvme_io": false, 00:25:38.407 "nvme_io_md": false, 00:25:38.407 "write_zeroes": true, 00:25:38.407 "zcopy": false, 00:25:38.407 "get_zone_info": false, 00:25:38.407 "zone_management": false, 00:25:38.407 "zone_append": false, 00:25:38.407 "compare": false, 00:25:38.407 "compare_and_write": false, 00:25:38.407 "abort": false, 00:25:38.407 "seek_hole": false, 00:25:38.407 "seek_data": false, 00:25:38.407 "copy": false, 00:25:38.407 "nvme_iov_md": false 00:25:38.407 }, 00:25:38.407 "memory_domains": [ 00:25:38.407 { 00:25:38.407 "dma_device_id": "system", 00:25:38.407 "dma_device_type": 1 00:25:38.407 }, 00:25:38.407 { 00:25:38.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.407 "dma_device_type": 2 00:25:38.407 }, 00:25:38.407 { 00:25:38.407 "dma_device_id": "system", 00:25:38.407 "dma_device_type": 1 00:25:38.407 }, 00:25:38.407 { 00:25:38.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.407 "dma_device_type": 2 00:25:38.407 } 00:25:38.407 ], 00:25:38.407 "driver_specific": { 00:25:38.407 "raid": { 00:25:38.407 "uuid": "a8d136ac-f313-413f-86a5-77865c20c351", 00:25:38.407 "strip_size_kb": 0, 00:25:38.407 "state": "online", 00:25:38.407 "raid_level": "raid1", 00:25:38.407 "superblock": 
false, 00:25:38.407 "num_base_bdevs": 2, 00:25:38.407 "num_base_bdevs_discovered": 2, 00:25:38.407 "num_base_bdevs_operational": 2, 00:25:38.407 "base_bdevs_list": [ 00:25:38.407 { 00:25:38.407 "name": "BaseBdev1", 00:25:38.407 "uuid": "a3ce0eda-80d0-49df-8fa4-12fc15b8da77", 00:25:38.407 "is_configured": true, 00:25:38.407 "data_offset": 0, 00:25:38.407 "data_size": 65536 00:25:38.407 }, 00:25:38.407 { 00:25:38.407 "name": "BaseBdev2", 00:25:38.407 "uuid": "ff40eea1-72cc-49a6-be38-62645ac5552f", 00:25:38.407 "is_configured": true, 00:25:38.407 "data_offset": 0, 00:25:38.407 "data_size": 65536 00:25:38.407 } 00:25:38.407 ] 00:25:38.407 } 00:25:38.407 } 00:25:38.407 }' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:38.407 BaseBdev2' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.407 15:55:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.407 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 [2024-11-05 15:55:10.787777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:38.666 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case 
$1 in 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.667 "name": "Existed_Raid", 00:25:38.667 "uuid": "a8d136ac-f313-413f-86a5-77865c20c351", 00:25:38.667 "strip_size_kb": 0, 00:25:38.667 "state": "online", 00:25:38.667 "raid_level": "raid1", 00:25:38.667 "superblock": false, 00:25:38.667 "num_base_bdevs": 2, 00:25:38.667 "num_base_bdevs_discovered": 1, 00:25:38.667 "num_base_bdevs_operational": 1, 00:25:38.667 "base_bdevs_list": [ 00:25:38.667 { 00:25:38.667 "name": null, 00:25:38.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.667 "is_configured": false, 00:25:38.667 "data_offset": 0, 00:25:38.667 "data_size": 65536 00:25:38.667 }, 00:25:38.667 { 00:25:38.667 "name": "BaseBdev2", 00:25:38.667 "uuid": "ff40eea1-72cc-49a6-be38-62645ac5552f", 00:25:38.667 "is_configured": true, 00:25:38.667 "data_offset": 0, 00:25:38.667 "data_size": 65536 00:25:38.667 } 00:25:38.667 ] 00:25:38.667 }' 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.667 15:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:38.925 
15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.925 [2024-11-05 15:55:11.154202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:38.925 [2024-11-05 15:55:11.154287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:38.925 [2024-11-05 15:55:11.212402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.925 [2024-11-05 15:55:11.212445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.925 [2024-11-05 15:55:11.212456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61189 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61189 ']' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 61189 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61189 00:25:38.925 killing process with pid 61189 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61189' 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61189 00:25:38.925 [2024-11-05 15:55:11.270448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:38.925 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61189 00:25:38.925 [2024-11-05 15:55:11.280716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:39.494 00:25:39.494 real 0m3.524s 00:25:39.494 user 0m5.150s 00:25:39.494 sys 0m0.532s 
00:25:39.494 ************************************ 00:25:39.494 END TEST raid_state_function_test 00:25:39.494 ************************************ 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.494 15:55:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:25:39.494 15:55:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:39.494 15:55:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:39.494 15:55:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.494 ************************************ 00:25:39.494 START TEST raid_state_function_test_sb 00:25:39.494 ************************************ 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:39.494 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:39.495 Process raid pid: 61420 00:25:39.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61420 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61420' 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61420 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61420 ']' 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:39.495 15:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.755 [2024-11-05 15:55:11.948093] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:39.755 [2024-11-05 15:55:11.948186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.755 [2024-11-05 15:55:12.098105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.014 [2024-11-05 15:55:12.182555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.014 [2024-11-05 15:55:12.292817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.014 [2024-11-05 15:55:12.292855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.581 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:40.581 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:25:40.581 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:40.581 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.581 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.581 [2024-11-05 15:55:12.724075] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.581 [2024-11-05 15:55:12.724213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.581 [2024-11-05 15:55:12.724227] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.581 [2024-11-05 15:55:12.724236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.581 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.582 
15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.582 "name": "Existed_Raid", 00:25:40.582 "uuid": "b1ed6208-0165-4e50-8912-4ffde93d9ccd", 00:25:40.582 "strip_size_kb": 0, 
00:25:40.582 "state": "configuring", 00:25:40.582 "raid_level": "raid1", 00:25:40.582 "superblock": true, 00:25:40.582 "num_base_bdevs": 2, 00:25:40.582 "num_base_bdevs_discovered": 0, 00:25:40.582 "num_base_bdevs_operational": 2, 00:25:40.582 "base_bdevs_list": [ 00:25:40.582 { 00:25:40.582 "name": "BaseBdev1", 00:25:40.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.582 "is_configured": false, 00:25:40.582 "data_offset": 0, 00:25:40.582 "data_size": 0 00:25:40.582 }, 00:25:40.582 { 00:25:40.582 "name": "BaseBdev2", 00:25:40.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.582 "is_configured": false, 00:25:40.582 "data_offset": 0, 00:25:40.582 "data_size": 0 00:25:40.582 } 00:25:40.582 ] 00:25:40.582 }' 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.582 15:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 [2024-11-05 15:55:13.032092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:40.841 [2024-11-05 15:55:13.032120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.841 15:55:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 [2024-11-05 15:55:13.040086] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.841 [2024-11-05 15:55:13.040121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.841 [2024-11-05 15:55:13.040127] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.841 [2024-11-05 15:55:13.040137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 [2024-11-05 15:55:13.067812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.841 BaseBdev1 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 [ 00:25:40.841 { 00:25:40.841 "name": "BaseBdev1", 00:25:40.841 "aliases": [ 00:25:40.841 "3cc20143-7ee2-448f-a34a-bbc8de9826c3" 00:25:40.841 ], 00:25:40.841 "product_name": "Malloc disk", 00:25:40.841 "block_size": 512, 00:25:40.841 "num_blocks": 65536, 00:25:40.841 "uuid": "3cc20143-7ee2-448f-a34a-bbc8de9826c3", 00:25:40.841 "assigned_rate_limits": { 00:25:40.841 "rw_ios_per_sec": 0, 00:25:40.841 "rw_mbytes_per_sec": 0, 00:25:40.841 "r_mbytes_per_sec": 0, 00:25:40.841 "w_mbytes_per_sec": 0 00:25:40.841 }, 00:25:40.841 "claimed": true, 00:25:40.841 "claim_type": "exclusive_write", 00:25:40.841 "zoned": false, 00:25:40.841 "supported_io_types": { 00:25:40.841 "read": true, 00:25:40.841 "write": true, 00:25:40.841 "unmap": true, 00:25:40.841 "flush": true, 00:25:40.841 "reset": true, 00:25:40.841 "nvme_admin": false, 00:25:40.841 "nvme_io": false, 00:25:40.841 "nvme_io_md": false, 00:25:40.841 "write_zeroes": true, 00:25:40.841 "zcopy": true, 00:25:40.841 "get_zone_info": false, 00:25:40.841 "zone_management": false, 00:25:40.841 "zone_append": false, 00:25:40.841 "compare": false, 00:25:40.841 "compare_and_write": false, 00:25:40.841 
"abort": true, 00:25:40.841 "seek_hole": false, 00:25:40.841 "seek_data": false, 00:25:40.841 "copy": true, 00:25:40.841 "nvme_iov_md": false 00:25:40.841 }, 00:25:40.841 "memory_domains": [ 00:25:40.841 { 00:25:40.841 "dma_device_id": "system", 00:25:40.841 "dma_device_type": 1 00:25:40.841 }, 00:25:40.841 { 00:25:40.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.841 "dma_device_type": 2 00:25:40.841 } 00:25:40.841 ], 00:25:40.841 "driver_specific": {} 00:25:40.841 } 00:25:40.841 ] 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.841 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.842 "name": "Existed_Raid", 00:25:40.842 "uuid": "eddf8903-c9b2-4db5-889b-a5764574f1bc", 00:25:40.842 "strip_size_kb": 0, 00:25:40.842 "state": "configuring", 00:25:40.842 "raid_level": "raid1", 00:25:40.842 "superblock": true, 00:25:40.842 "num_base_bdevs": 2, 00:25:40.842 "num_base_bdevs_discovered": 1, 00:25:40.842 "num_base_bdevs_operational": 2, 00:25:40.842 "base_bdevs_list": [ 00:25:40.842 { 00:25:40.842 "name": "BaseBdev1", 00:25:40.842 "uuid": "3cc20143-7ee2-448f-a34a-bbc8de9826c3", 00:25:40.842 "is_configured": true, 00:25:40.842 "data_offset": 2048, 00:25:40.842 "data_size": 63488 00:25:40.842 }, 00:25:40.842 { 00:25:40.842 "name": "BaseBdev2", 00:25:40.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.842 "is_configured": false, 00:25:40.842 "data_offset": 0, 00:25:40.842 "data_size": 0 00:25:40.842 } 00:25:40.842 ] 00:25:40.842 }' 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.842 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.101 [2024-11-05 15:55:13.375911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:41.101 [2024-11-05 15:55:13.376044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.101 [2024-11-05 15:55:13.383957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:41.101 [2024-11-05 15:55:13.385444] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:41.101 [2024-11-05 15:55:13.385481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.101 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.102 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.102 "name": "Existed_Raid", 00:25:41.102 "uuid": "bd622738-feed-4c24-a52f-b13213558361", 00:25:41.102 "strip_size_kb": 0, 00:25:41.102 "state": "configuring", 00:25:41.102 "raid_level": "raid1", 00:25:41.102 "superblock": true, 00:25:41.102 "num_base_bdevs": 2, 00:25:41.102 "num_base_bdevs_discovered": 1, 00:25:41.102 "num_base_bdevs_operational": 2, 00:25:41.102 "base_bdevs_list": [ 00:25:41.102 { 00:25:41.102 "name": "BaseBdev1", 00:25:41.102 "uuid": "3cc20143-7ee2-448f-a34a-bbc8de9826c3", 00:25:41.102 "is_configured": true, 00:25:41.102 "data_offset": 2048, 
00:25:41.102 "data_size": 63488 00:25:41.102 }, 00:25:41.102 { 00:25:41.102 "name": "BaseBdev2", 00:25:41.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.102 "is_configured": false, 00:25:41.102 "data_offset": 0, 00:25:41.102 "data_size": 0 00:25:41.102 } 00:25:41.102 ] 00:25:41.102 }' 00:25:41.102 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.102 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.361 [2024-11-05 15:55:13.698196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:41.361 BaseBdev2 00:25:41.361 [2024-11-05 15:55:13.698475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:41.361 [2024-11-05 15:55:13.698491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:41.361 [2024-11-05 15:55:13.698704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:41.361 [2024-11-05 15:55:13.698813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:41.361 [2024-11-05 15:55:13.698822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:41.361 [2024-11-05 15:55:13.698949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.361 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.361 [ 00:25:41.361 { 00:25:41.361 "name": "BaseBdev2", 00:25:41.361 "aliases": [ 00:25:41.361 "5711d4c0-8b17-40f9-8117-2492c61d7eed" 00:25:41.361 ], 00:25:41.361 "product_name": "Malloc disk", 00:25:41.361 "block_size": 512, 00:25:41.361 "num_blocks": 65536, 00:25:41.361 "uuid": "5711d4c0-8b17-40f9-8117-2492c61d7eed", 00:25:41.361 "assigned_rate_limits": { 00:25:41.361 "rw_ios_per_sec": 0, 00:25:41.361 "rw_mbytes_per_sec": 0, 00:25:41.361 "r_mbytes_per_sec": 0, 00:25:41.361 "w_mbytes_per_sec": 0 00:25:41.361 }, 00:25:41.361 "claimed": true, 00:25:41.361 "claim_type": 
"exclusive_write", 00:25:41.361 "zoned": false, 00:25:41.361 "supported_io_types": { 00:25:41.361 "read": true, 00:25:41.361 "write": true, 00:25:41.361 "unmap": true, 00:25:41.361 "flush": true, 00:25:41.361 "reset": true, 00:25:41.361 "nvme_admin": false, 00:25:41.362 "nvme_io": false, 00:25:41.362 "nvme_io_md": false, 00:25:41.362 "write_zeroes": true, 00:25:41.362 "zcopy": true, 00:25:41.362 "get_zone_info": false, 00:25:41.362 "zone_management": false, 00:25:41.362 "zone_append": false, 00:25:41.362 "compare": false, 00:25:41.362 "compare_and_write": false, 00:25:41.362 "abort": true, 00:25:41.362 "seek_hole": false, 00:25:41.362 "seek_data": false, 00:25:41.362 "copy": true, 00:25:41.362 "nvme_iov_md": false 00:25:41.362 }, 00:25:41.362 "memory_domains": [ 00:25:41.362 { 00:25:41.362 "dma_device_id": "system", 00:25:41.362 "dma_device_type": 1 00:25:41.362 }, 00:25:41.362 { 00:25:41.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.362 "dma_device_type": 2 00:25:41.362 } 00:25:41.362 ], 00:25:41.362 "driver_specific": {} 00:25:41.362 } 00:25:41.362 ] 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.362 "name": "Existed_Raid", 00:25:41.362 "uuid": "bd622738-feed-4c24-a52f-b13213558361", 00:25:41.362 "strip_size_kb": 0, 00:25:41.362 "state": "online", 00:25:41.362 "raid_level": "raid1", 00:25:41.362 "superblock": true, 00:25:41.362 "num_base_bdevs": 2, 00:25:41.362 "num_base_bdevs_discovered": 2, 00:25:41.362 "num_base_bdevs_operational": 2, 00:25:41.362 "base_bdevs_list": [ 00:25:41.362 { 00:25:41.362 "name": "BaseBdev1", 00:25:41.362 "uuid": "3cc20143-7ee2-448f-a34a-bbc8de9826c3", 00:25:41.362 "is_configured": true, 00:25:41.362 "data_offset": 2048, 00:25:41.362 "data_size": 63488 
00:25:41.362 }, 00:25:41.362 { 00:25:41.362 "name": "BaseBdev2", 00:25:41.362 "uuid": "5711d4c0-8b17-40f9-8117-2492c61d7eed", 00:25:41.362 "is_configured": true, 00:25:41.362 "data_offset": 2048, 00:25:41.362 "data_size": 63488 00:25:41.362 } 00:25:41.362 ] 00:25:41.362 }' 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.362 15:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.622 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:41.622 15:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.622 [2024-11-05 15:55:14.010544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.622 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.622 "name": 
"Existed_Raid", 00:25:41.622 "aliases": [ 00:25:41.622 "bd622738-feed-4c24-a52f-b13213558361" 00:25:41.622 ], 00:25:41.622 "product_name": "Raid Volume", 00:25:41.622 "block_size": 512, 00:25:41.622 "num_blocks": 63488, 00:25:41.622 "uuid": "bd622738-feed-4c24-a52f-b13213558361", 00:25:41.622 "assigned_rate_limits": { 00:25:41.622 "rw_ios_per_sec": 0, 00:25:41.622 "rw_mbytes_per_sec": 0, 00:25:41.622 "r_mbytes_per_sec": 0, 00:25:41.622 "w_mbytes_per_sec": 0 00:25:41.622 }, 00:25:41.622 "claimed": false, 00:25:41.622 "zoned": false, 00:25:41.622 "supported_io_types": { 00:25:41.622 "read": true, 00:25:41.622 "write": true, 00:25:41.622 "unmap": false, 00:25:41.622 "flush": false, 00:25:41.622 "reset": true, 00:25:41.622 "nvme_admin": false, 00:25:41.622 "nvme_io": false, 00:25:41.622 "nvme_io_md": false, 00:25:41.622 "write_zeroes": true, 00:25:41.622 "zcopy": false, 00:25:41.622 "get_zone_info": false, 00:25:41.622 "zone_management": false, 00:25:41.622 "zone_append": false, 00:25:41.622 "compare": false, 00:25:41.622 "compare_and_write": false, 00:25:41.622 "abort": false, 00:25:41.622 "seek_hole": false, 00:25:41.622 "seek_data": false, 00:25:41.622 "copy": false, 00:25:41.622 "nvme_iov_md": false 00:25:41.622 }, 00:25:41.622 "memory_domains": [ 00:25:41.622 { 00:25:41.622 "dma_device_id": "system", 00:25:41.622 "dma_device_type": 1 00:25:41.622 }, 00:25:41.622 { 00:25:41.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.622 "dma_device_type": 2 00:25:41.622 }, 00:25:41.622 { 00:25:41.622 "dma_device_id": "system", 00:25:41.622 "dma_device_type": 1 00:25:41.622 }, 00:25:41.622 { 00:25:41.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.622 "dma_device_type": 2 00:25:41.622 } 00:25:41.622 ], 00:25:41.622 "driver_specific": { 00:25:41.622 "raid": { 00:25:41.622 "uuid": "bd622738-feed-4c24-a52f-b13213558361", 00:25:41.622 "strip_size_kb": 0, 00:25:41.622 "state": "online", 00:25:41.622 "raid_level": "raid1", 00:25:41.622 "superblock": true, 00:25:41.622 
"num_base_bdevs": 2, 00:25:41.622 "num_base_bdevs_discovered": 2, 00:25:41.622 "num_base_bdevs_operational": 2, 00:25:41.622 "base_bdevs_list": [ 00:25:41.622 { 00:25:41.622 "name": "BaseBdev1", 00:25:41.622 "uuid": "3cc20143-7ee2-448f-a34a-bbc8de9826c3", 00:25:41.622 "is_configured": true, 00:25:41.622 "data_offset": 2048, 00:25:41.622 "data_size": 63488 00:25:41.622 }, 00:25:41.622 { 00:25:41.623 "name": "BaseBdev2", 00:25:41.623 "uuid": "5711d4c0-8b17-40f9-8117-2492c61d7eed", 00:25:41.623 "is_configured": true, 00:25:41.623 "data_offset": 2048, 00:25:41.623 "data_size": 63488 00:25:41.623 } 00:25:41.623 ] 00:25:41.623 } 00:25:41.623 } 00:25:41.623 }' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:41.881 BaseBdev2' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:41.881 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.882 [2024-11-05 15:55:14.162344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:41.882 15:55:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.882 15:55:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.882 "name": "Existed_Raid", 00:25:41.882 "uuid": "bd622738-feed-4c24-a52f-b13213558361", 00:25:41.882 "strip_size_kb": 0, 00:25:41.882 "state": "online", 00:25:41.882 "raid_level": "raid1", 00:25:41.882 "superblock": true, 00:25:41.882 "num_base_bdevs": 2, 00:25:41.882 "num_base_bdevs_discovered": 1, 00:25:41.882 "num_base_bdevs_operational": 1, 00:25:41.882 "base_bdevs_list": [ 00:25:41.882 { 00:25:41.882 "name": null, 00:25:41.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.882 "is_configured": false, 00:25:41.882 "data_offset": 0, 00:25:41.882 "data_size": 63488 00:25:41.882 }, 00:25:41.882 { 00:25:41.882 "name": "BaseBdev2", 00:25:41.882 "uuid": "5711d4c0-8b17-40f9-8117-2492c61d7eed", 00:25:41.882 "is_configured": true, 00:25:41.882 "data_offset": 2048, 00:25:41.882 "data_size": 63488 00:25:41.882 } 00:25:41.882 ] 00:25:41.882 }' 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.882 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.141 15:55:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.141 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.400 [2024-11-05 15:55:14.561022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:42.400 [2024-11-05 15:55:14.561205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:42.400 [2024-11-05 15:55:14.608435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.400 [2024-11-05 15:55:14.608475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:42.400 [2024-11-05 15:55:14.608483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61420 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61420 ']' 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61420 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61420 00:25:42.400 killing process with pid 61420 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61420' 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61420 00:25:42.400 [2024-11-05 15:55:14.669646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:42.400 15:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61420 
00:25:42.400 [2024-11-05 15:55:14.678090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:42.967 ************************************ 00:25:42.967 END TEST raid_state_function_test_sb 00:25:42.967 ************************************ 00:25:42.967 15:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:42.967 00:25:42.967 real 0m3.359s 00:25:42.967 user 0m4.889s 00:25:42.967 sys 0m0.505s 00:25:42.967 15:55:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:42.967 15:55:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.967 15:55:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:25:42.967 15:55:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:42.967 15:55:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:42.967 15:55:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:42.967 ************************************ 00:25:42.967 START TEST raid_superblock_test 00:25:42.967 ************************************ 00:25:42.967 15:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:25:42.967 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:42.967 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 
-- # base_bdevs_pt_uuid=() 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:42.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61657 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61657 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61657 ']' 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.968 15:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:42.968 [2024-11-05 15:55:15.348456] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:42.968 [2024-11-05 15:55:15.348561] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61657 ] 00:25:43.226 [2024-11-05 15:55:15.497493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.226 [2024-11-05 15:55:15.583012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.485 [2024-11-05 15:55:15.693137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:43.485 [2024-11-05 15:55:15.693187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:44.051 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:44.051 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:25:44.051 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:44.051 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:44.051 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:44.051 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:44.051 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:44.051 
15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.052 malloc1 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.052 [2024-11-05 15:55:16.227379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:44.052 [2024-11-05 15:55:16.227435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.052 [2024-11-05 15:55:16.227452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:44.052 [2024-11-05 15:55:16.227459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.052 [2024-11-05 15:55:16.229222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.052 [2024-11-05 15:55:16.229253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:44.052 pt1 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.052 malloc2 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.052 [2024-11-05 15:55:16.258894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:44.052 [2024-11-05 15:55:16.258939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.052 [2024-11-05 
15:55:16.258955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:44.052 [2024-11-05 15:55:16.258962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.052 [2024-11-05 15:55:16.260685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.052 [2024-11-05 15:55:16.260717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:44.052 pt2 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.052 [2024-11-05 15:55:16.266941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:44.052 [2024-11-05 15:55:16.268457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:44.052 [2024-11-05 15:55:16.268588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:44.052 [2024-11-05 15:55:16.268601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:44.052 [2024-11-05 15:55:16.268805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:44.052 [2024-11-05 15:55:16.268927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:44.052 [2024-11-05 15:55:16.268939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:25:44.052 [2024-11-05 15:55:16.269054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.052 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:25:44.052 "name": "raid_bdev1", 00:25:44.052 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:44.052 "strip_size_kb": 0, 00:25:44.052 "state": "online", 00:25:44.052 "raid_level": "raid1", 00:25:44.052 "superblock": true, 00:25:44.052 "num_base_bdevs": 2, 00:25:44.052 "num_base_bdevs_discovered": 2, 00:25:44.052 "num_base_bdevs_operational": 2, 00:25:44.052 "base_bdevs_list": [ 00:25:44.052 { 00:25:44.052 "name": "pt1", 00:25:44.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:44.052 "is_configured": true, 00:25:44.052 "data_offset": 2048, 00:25:44.052 "data_size": 63488 00:25:44.053 }, 00:25:44.053 { 00:25:44.053 "name": "pt2", 00:25:44.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:44.053 "is_configured": true, 00:25:44.053 "data_offset": 2048, 00:25:44.053 "data_size": 63488 00:25:44.053 } 00:25:44.053 ] 00:25:44.053 }' 00:25:44.053 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.053 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:44.311 15:55:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.311 [2024-11-05 15:55:16.591221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.311 "name": "raid_bdev1", 00:25:44.311 "aliases": [ 00:25:44.311 "1376c907-5ed0-412d-84e6-81031150080a" 00:25:44.311 ], 00:25:44.311 "product_name": "Raid Volume", 00:25:44.311 "block_size": 512, 00:25:44.311 "num_blocks": 63488, 00:25:44.311 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:44.311 "assigned_rate_limits": { 00:25:44.311 "rw_ios_per_sec": 0, 00:25:44.311 "rw_mbytes_per_sec": 0, 00:25:44.311 "r_mbytes_per_sec": 0, 00:25:44.311 "w_mbytes_per_sec": 0 00:25:44.311 }, 00:25:44.311 "claimed": false, 00:25:44.311 "zoned": false, 00:25:44.311 "supported_io_types": { 00:25:44.311 "read": true, 00:25:44.311 "write": true, 00:25:44.311 "unmap": false, 00:25:44.311 "flush": false, 00:25:44.311 "reset": true, 00:25:44.311 "nvme_admin": false, 00:25:44.311 "nvme_io": false, 00:25:44.311 "nvme_io_md": false, 00:25:44.311 "write_zeroes": true, 00:25:44.311 "zcopy": false, 00:25:44.311 "get_zone_info": false, 00:25:44.311 "zone_management": false, 00:25:44.311 "zone_append": false, 00:25:44.311 "compare": false, 00:25:44.311 "compare_and_write": false, 00:25:44.311 "abort": false, 00:25:44.311 "seek_hole": false, 00:25:44.311 "seek_data": false, 00:25:44.311 "copy": false, 00:25:44.311 "nvme_iov_md": false 00:25:44.311 }, 00:25:44.311 "memory_domains": [ 00:25:44.311 { 00:25:44.311 "dma_device_id": "system", 00:25:44.311 "dma_device_type": 1 00:25:44.311 }, 00:25:44.311 { 00:25:44.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.311 
"dma_device_type": 2 00:25:44.311 }, 00:25:44.311 { 00:25:44.311 "dma_device_id": "system", 00:25:44.311 "dma_device_type": 1 00:25:44.311 }, 00:25:44.311 { 00:25:44.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.311 "dma_device_type": 2 00:25:44.311 } 00:25:44.311 ], 00:25:44.311 "driver_specific": { 00:25:44.311 "raid": { 00:25:44.311 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:44.311 "strip_size_kb": 0, 00:25:44.311 "state": "online", 00:25:44.311 "raid_level": "raid1", 00:25:44.311 "superblock": true, 00:25:44.311 "num_base_bdevs": 2, 00:25:44.311 "num_base_bdevs_discovered": 2, 00:25:44.311 "num_base_bdevs_operational": 2, 00:25:44.311 "base_bdevs_list": [ 00:25:44.311 { 00:25:44.311 "name": "pt1", 00:25:44.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:44.311 "is_configured": true, 00:25:44.311 "data_offset": 2048, 00:25:44.311 "data_size": 63488 00:25:44.311 }, 00:25:44.311 { 00:25:44.311 "name": "pt2", 00:25:44.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:44.311 "is_configured": true, 00:25:44.311 "data_offset": 2048, 00:25:44.311 "data_size": 63488 00:25:44.311 } 00:25:44.311 ] 00:25:44.311 } 00:25:44.311 } 00:25:44.311 }' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:44.311 pt2' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:44.311 15:55:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:44.311 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.312 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.312 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.570 
[2024-11-05 15:55:16.751237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1376c907-5ed0-412d-84e6-81031150080a 00:25:44.570 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1376c907-5ed0-412d-84e6-81031150080a ']' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 [2024-11-05 15:55:16.782997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:44.571 [2024-11-05 15:55:16.783016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:44.571 [2024-11-05 15:55:16.783079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:44.571 [2024-11-05 15:55:16.783128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:44.571 [2024-11-05 15:55:16.783138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 [2024-11-05 15:55:16.879051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:44.571 [2024-11-05 15:55:16.880650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:44.571 [2024-11-05 15:55:16.880707] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:44.571 [2024-11-05 15:55:16.880751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:44.571 [2024-11-05 15:55:16.880762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:25:44.571 [2024-11-05 15:55:16.880771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:44.571 request: 00:25:44.571 { 00:25:44.571 "name": "raid_bdev1", 00:25:44.571 "raid_level": "raid1", 00:25:44.571 "base_bdevs": [ 00:25:44.571 "malloc1", 00:25:44.571 "malloc2" 00:25:44.571 ], 00:25:44.571 "superblock": false, 00:25:44.571 "method": "bdev_raid_create", 00:25:44.571 "req_id": 1 00:25:44.571 } 00:25:44.571 Got JSON-RPC error response 00:25:44.571 response: 00:25:44.571 { 00:25:44.571 "code": -17, 00:25:44.571 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:44.571 } 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 [2024-11-05 15:55:16.923058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:44.571 [2024-11-05 15:55:16.923109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.571 [2024-11-05 15:55:16.923123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:44.571 [2024-11-05 15:55:16.923132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.571 [2024-11-05 15:55:16.924960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.571 [2024-11-05 15:55:16.924993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:44.571 [2024-11-05 15:55:16.925055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:44.571 [2024-11-05 15:55:16.925100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:44.571 pt1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:44.571 15:55:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.571 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.571 "name": "raid_bdev1", 00:25:44.571 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:44.571 "strip_size_kb": 0, 00:25:44.571 "state": "configuring", 00:25:44.571 "raid_level": "raid1", 00:25:44.571 "superblock": true, 00:25:44.571 "num_base_bdevs": 2, 00:25:44.572 "num_base_bdevs_discovered": 1, 00:25:44.572 "num_base_bdevs_operational": 2, 00:25:44.572 "base_bdevs_list": [ 00:25:44.572 { 00:25:44.572 "name": "pt1", 00:25:44.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:44.572 "is_configured": true, 00:25:44.572 "data_offset": 2048, 00:25:44.572 "data_size": 63488 00:25:44.572 }, 00:25:44.572 { 00:25:44.572 "name": null, 00:25:44.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:44.572 "is_configured": false, 00:25:44.572 "data_offset": 2048, 00:25:44.572 "data_size": 63488 00:25:44.572 } 
00:25:44.572 ] 00:25:44.572 }' 00:25:44.572 15:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.572 15:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.135 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:45.135 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:45.135 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:45.135 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:45.135 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.135 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.135 [2024-11-05 15:55:17.263116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:45.135 [2024-11-05 15:55:17.263176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.135 [2024-11-05 15:55:17.263190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:45.135 [2024-11-05 15:55:17.263199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.135 [2024-11-05 15:55:17.263543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.135 [2024-11-05 15:55:17.263555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:45.135 [2024-11-05 15:55:17.263613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:45.135 [2024-11-05 15:55:17.263630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:45.135 [2024-11-05 15:55:17.263718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 
00:25:45.135 [2024-11-05 15:55:17.263727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:45.136 [2024-11-05 15:55:17.263928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:45.136 [2024-11-05 15:55:17.264038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:45.136 [2024-11-05 15:55:17.264045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:45.136 [2024-11-05 15:55:17.264149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.136 pt2 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.136 
15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.136 "name": "raid_bdev1", 00:25:45.136 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:45.136 "strip_size_kb": 0, 00:25:45.136 "state": "online", 00:25:45.136 "raid_level": "raid1", 00:25:45.136 "superblock": true, 00:25:45.136 "num_base_bdevs": 2, 00:25:45.136 "num_base_bdevs_discovered": 2, 00:25:45.136 "num_base_bdevs_operational": 2, 00:25:45.136 "base_bdevs_list": [ 00:25:45.136 { 00:25:45.136 "name": "pt1", 00:25:45.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:45.136 "is_configured": true, 00:25:45.136 "data_offset": 2048, 00:25:45.136 "data_size": 63488 00:25:45.136 }, 00:25:45.136 { 00:25:45.136 "name": "pt2", 00:25:45.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:45.136 "is_configured": true, 00:25:45.136 "data_offset": 2048, 00:25:45.136 "data_size": 63488 00:25:45.136 } 00:25:45.136 ] 00:25:45.136 }' 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.136 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.416 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:45.416 [2024-11-05 15:55:17.623387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:45.417 "name": "raid_bdev1", 00:25:45.417 "aliases": [ 00:25:45.417 "1376c907-5ed0-412d-84e6-81031150080a" 00:25:45.417 ], 00:25:45.417 "product_name": "Raid Volume", 00:25:45.417 "block_size": 512, 00:25:45.417 "num_blocks": 63488, 00:25:45.417 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:45.417 "assigned_rate_limits": { 00:25:45.417 "rw_ios_per_sec": 0, 00:25:45.417 "rw_mbytes_per_sec": 0, 00:25:45.417 "r_mbytes_per_sec": 0, 00:25:45.417 "w_mbytes_per_sec": 0 00:25:45.417 }, 00:25:45.417 "claimed": false, 00:25:45.417 "zoned": false, 00:25:45.417 "supported_io_types": { 00:25:45.417 "read": true, 00:25:45.417 "write": true, 00:25:45.417 "unmap": false, 00:25:45.417 "flush": false, 00:25:45.417 "reset": true, 00:25:45.417 "nvme_admin": false, 00:25:45.417 "nvme_io": false, 00:25:45.417 
"nvme_io_md": false, 00:25:45.417 "write_zeroes": true, 00:25:45.417 "zcopy": false, 00:25:45.417 "get_zone_info": false, 00:25:45.417 "zone_management": false, 00:25:45.417 "zone_append": false, 00:25:45.417 "compare": false, 00:25:45.417 "compare_and_write": false, 00:25:45.417 "abort": false, 00:25:45.417 "seek_hole": false, 00:25:45.417 "seek_data": false, 00:25:45.417 "copy": false, 00:25:45.417 "nvme_iov_md": false 00:25:45.417 }, 00:25:45.417 "memory_domains": [ 00:25:45.417 { 00:25:45.417 "dma_device_id": "system", 00:25:45.417 "dma_device_type": 1 00:25:45.417 }, 00:25:45.417 { 00:25:45.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.417 "dma_device_type": 2 00:25:45.417 }, 00:25:45.417 { 00:25:45.417 "dma_device_id": "system", 00:25:45.417 "dma_device_type": 1 00:25:45.417 }, 00:25:45.417 { 00:25:45.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.417 "dma_device_type": 2 00:25:45.417 } 00:25:45.417 ], 00:25:45.417 "driver_specific": { 00:25:45.417 "raid": { 00:25:45.417 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:45.417 "strip_size_kb": 0, 00:25:45.417 "state": "online", 00:25:45.417 "raid_level": "raid1", 00:25:45.417 "superblock": true, 00:25:45.417 "num_base_bdevs": 2, 00:25:45.417 "num_base_bdevs_discovered": 2, 00:25:45.417 "num_base_bdevs_operational": 2, 00:25:45.417 "base_bdevs_list": [ 00:25:45.417 { 00:25:45.417 "name": "pt1", 00:25:45.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:45.417 "is_configured": true, 00:25:45.417 "data_offset": 2048, 00:25:45.417 "data_size": 63488 00:25:45.417 }, 00:25:45.417 { 00:25:45.417 "name": "pt2", 00:25:45.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:45.417 "is_configured": true, 00:25:45.417 "data_offset": 2048, 00:25:45.417 "data_size": 63488 00:25:45.417 } 00:25:45.417 ] 00:25:45.417 } 00:25:45.417 } 00:25:45.417 }' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:45.417 pt2' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 [2024-11-05 15:55:17.795417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:45.417 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1376c907-5ed0-412d-84e6-81031150080a '!=' 1376c907-5ed0-412d-84e6-81031150080a ']' 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.688 [2024-11-05 15:55:17.827255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.688 "name": "raid_bdev1", 00:25:45.688 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:45.688 "strip_size_kb": 0, 00:25:45.688 "state": "online", 00:25:45.688 "raid_level": "raid1", 00:25:45.688 "superblock": true, 00:25:45.688 "num_base_bdevs": 2, 00:25:45.688 "num_base_bdevs_discovered": 1, 00:25:45.688 "num_base_bdevs_operational": 1, 00:25:45.688 
"base_bdevs_list": [ 00:25:45.688 { 00:25:45.688 "name": null, 00:25:45.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.688 "is_configured": false, 00:25:45.688 "data_offset": 0, 00:25:45.688 "data_size": 63488 00:25:45.688 }, 00:25:45.688 { 00:25:45.688 "name": "pt2", 00:25:45.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:45.688 "is_configured": true, 00:25:45.688 "data_offset": 2048, 00:25:45.688 "data_size": 63488 00:25:45.688 } 00:25:45.688 ] 00:25:45.688 }' 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.688 15:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.947 [2024-11-05 15:55:18.159291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:45.947 [2024-11-05 15:55:18.159423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:45.947 [2024-11-05 15:55:18.159546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:45.947 [2024-11-05 15:55:18.159633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:45.947 [2024-11-05 15:55:18.159648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:45.947 [2024-11-05 15:55:18.215293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:45.947 [2024-11-05 15:55:18.215348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.947 [2024-11-05 15:55:18.215361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:45.947 [2024-11-05 15:55:18.215370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.947 [2024-11-05 15:55:18.217294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.947 [2024-11-05 15:55:18.217399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:45.947 [2024-11-05 15:55:18.217513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:45.947 [2024-11-05 15:55:18.217606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:45.947 [2024-11-05 15:55:18.217733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:45.947 [2024-11-05 15:55:18.217785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:45.947 [2024-11-05 15:55:18.218007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:45.947 [2024-11-05 15:55:18.218186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:45.947 [2024-11-05 15:55:18.218241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:45.947 [2024-11-05 15:55:18.218359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.947 pt2 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.947 "name": "raid_bdev1", 00:25:45.947 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:45.947 "strip_size_kb": 0, 00:25:45.947 "state": "online", 00:25:45.947 "raid_level": "raid1", 00:25:45.947 "superblock": true, 00:25:45.947 "num_base_bdevs": 2, 00:25:45.947 "num_base_bdevs_discovered": 1, 00:25:45.947 "num_base_bdevs_operational": 1, 00:25:45.947 
"base_bdevs_list": [ 00:25:45.947 { 00:25:45.947 "name": null, 00:25:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.947 "is_configured": false, 00:25:45.947 "data_offset": 2048, 00:25:45.947 "data_size": 63488 00:25:45.947 }, 00:25:45.947 { 00:25:45.947 "name": "pt2", 00:25:45.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:45.947 "is_configured": true, 00:25:45.947 "data_offset": 2048, 00:25:45.947 "data_size": 63488 00:25:45.947 } 00:25:45.947 ] 00:25:45.947 }' 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.947 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.205 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:46.205 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.205 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.205 [2024-11-05 15:55:18.547333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:46.205 [2024-11-05 15:55:18.547460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:46.206 [2024-11-05 15:55:18.547526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.206 [2024-11-05 15:55:18.547566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:46.206 [2024-11-05 15:55:18.547574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.206 [2024-11-05 15:55:18.587370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:46.206 [2024-11-05 15:55:18.587421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.206 [2024-11-05 15:55:18.587436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:46.206 [2024-11-05 15:55:18.587443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.206 [2024-11-05 15:55:18.589243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.206 [2024-11-05 15:55:18.589273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:46.206 [2024-11-05 15:55:18.589339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:46.206 [2024-11-05 15:55:18.589371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:46.206 [2024-11-05 15:55:18.589468] 
bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:46.206 [2024-11-05 15:55:18.589476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:46.206 [2024-11-05 15:55:18.589488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:46.206 [2024-11-05 15:55:18.589526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:46.206 [2024-11-05 15:55:18.589580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:46.206 [2024-11-05 15:55:18.589587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:46.206 [2024-11-05 15:55:18.589787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:46.206 [2024-11-05 15:55:18.589905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:46.206 [2024-11-05 15:55:18.589913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:46.206 [2024-11-05 15:55:18.590023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.206 pt1 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.206 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.464 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.464 "name": "raid_bdev1", 00:25:46.464 "uuid": "1376c907-5ed0-412d-84e6-81031150080a", 00:25:46.464 "strip_size_kb": 0, 00:25:46.464 "state": "online", 00:25:46.464 "raid_level": "raid1", 00:25:46.464 "superblock": true, 00:25:46.464 "num_base_bdevs": 2, 00:25:46.464 "num_base_bdevs_discovered": 1, 00:25:46.464 "num_base_bdevs_operational": 1, 00:25:46.464 "base_bdevs_list": [ 00:25:46.464 { 00:25:46.464 "name": null, 00:25:46.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.464 "is_configured": false, 00:25:46.464 "data_offset": 2048, 00:25:46.464 "data_size": 63488 00:25:46.464 }, 00:25:46.464 { 00:25:46.464 "name": "pt2", 00:25:46.464 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:25:46.464 "is_configured": true, 00:25:46.464 "data_offset": 2048, 00:25:46.464 "data_size": 63488 00:25:46.464 } 00:25:46.464 ] 00:25:46.464 }' 00:25:46.464 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.464 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.723 [2024-11-05 15:55:18.955613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1376c907-5ed0-412d-84e6-81031150080a '!=' 1376c907-5ed0-412d-84e6-81031150080a ']' 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61657 00:25:46.723 15:55:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61657 ']' 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61657 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:46.723 15:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61657 00:25:46.723 killing process with pid 61657 00:25:46.723 15:55:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:46.723 15:55:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:46.723 15:55:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61657' 00:25:46.723 15:55:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61657 00:25:46.723 [2024-11-05 15:55:19.005567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:46.723 15:55:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61657 00:25:46.723 [2024-11-05 15:55:19.005636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.723 [2024-11-05 15:55:19.005673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:46.723 [2024-11-05 15:55:19.005684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:46.723 [2024-11-05 15:55:19.108223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:47.289 15:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:47.289 00:25:47.289 real 0m4.383s 00:25:47.289 user 0m6.803s 00:25:47.289 sys 0m0.655s 00:25:47.289 15:55:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:47.289 ************************************ 00:25:47.289 END TEST raid_superblock_test 00:25:47.289 15:55:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.289 ************************************ 00:25:47.547 15:55:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:25:47.547 15:55:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:47.547 15:55:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:47.547 15:55:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:47.547 ************************************ 00:25:47.547 START TEST raid_read_error_test 00:25:47.547 ************************************ 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:47.547 15:55:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WfOQ8Tc6Bi 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61969 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61969 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61969 ']' 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:47.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.547 15:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:47.547 [2024-11-05 15:55:19.789667] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:47.547 [2024-11-05 15:55:19.789793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61969 ] 00:25:47.547 [2024-11-05 15:55:19.946286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.807 [2024-11-05 15:55:20.047668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.807 [2024-11-05 15:55:20.186510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:47.807 [2024-11-05 15:55:20.186659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.396 BaseBdev1_malloc 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.396 true 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.396 [2024-11-05 15:55:20.635815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:48.396 [2024-11-05 15:55:20.635889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.396 [2024-11-05 15:55:20.635909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:48.396 [2024-11-05 15:55:20.635920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.396 [2024-11-05 15:55:20.638021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.396 [2024-11-05 15:55:20.638057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:48.396 BaseBdev1 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.396 15:55:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.396 BaseBdev2_malloc 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.396 true 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.396 [2024-11-05 15:55:20.680219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:48.396 [2024-11-05 15:55:20.680274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.396 [2024-11-05 15:55:20.680291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:48.396 [2024-11-05 15:55:20.680301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.396 [2024-11-05 15:55:20.682442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.396 
[2024-11-05 15:55:20.682480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:48.396 BaseBdev2 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.396 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.396 [2024-11-05 15:55:20.688281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:48.396 [2024-11-05 15:55:20.690168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:48.396 [2024-11-05 15:55:20.690356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:48.396 [2024-11-05 15:55:20.690370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:48.396 [2024-11-05 15:55:20.690613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:48.396 [2024-11-05 15:55:20.690766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:48.396 [2024-11-05 15:55:20.690775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:48.396 [2024-11-05 15:55:20.690939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.397 "name": "raid_bdev1", 00:25:48.397 "uuid": "bf216193-3d48-4d2e-89b5-aae1461365ea", 00:25:48.397 "strip_size_kb": 0, 00:25:48.397 "state": "online", 00:25:48.397 "raid_level": "raid1", 00:25:48.397 "superblock": true, 00:25:48.397 "num_base_bdevs": 2, 00:25:48.397 "num_base_bdevs_discovered": 2, 00:25:48.397 "num_base_bdevs_operational": 2, 00:25:48.397 "base_bdevs_list": [ 00:25:48.397 { 00:25:48.397 "name": "BaseBdev1", 00:25:48.397 "uuid": 
"118a0504-4600-5587-a47d-bc5e4cffc42d", 00:25:48.397 "is_configured": true, 00:25:48.397 "data_offset": 2048, 00:25:48.397 "data_size": 63488 00:25:48.397 }, 00:25:48.397 { 00:25:48.397 "name": "BaseBdev2", 00:25:48.397 "uuid": "1bbc4721-5ca2-5ab7-83c4-f8c02be00d25", 00:25:48.397 "is_configured": true, 00:25:48.397 "data_offset": 2048, 00:25:48.397 "data_size": 63488 00:25:48.397 } 00:25:48.397 ] 00:25:48.397 }' 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.397 15:55:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.654 15:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:48.654 15:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:48.912 [2024-11-05 15:55:21.101327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:49.843 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:49.843 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.843 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.843 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.843 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:49.843 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:25:49.843 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.844 "name": "raid_bdev1", 00:25:49.844 "uuid": "bf216193-3d48-4d2e-89b5-aae1461365ea", 00:25:49.844 "strip_size_kb": 0, 00:25:49.844 "state": "online", 00:25:49.844 "raid_level": "raid1", 00:25:49.844 "superblock": true, 00:25:49.844 "num_base_bdevs": 2, 00:25:49.844 "num_base_bdevs_discovered": 2, 00:25:49.844 "num_base_bdevs_operational": 2, 
00:25:49.844 "base_bdevs_list": [ 00:25:49.844 { 00:25:49.844 "name": "BaseBdev1", 00:25:49.844 "uuid": "118a0504-4600-5587-a47d-bc5e4cffc42d", 00:25:49.844 "is_configured": true, 00:25:49.844 "data_offset": 2048, 00:25:49.844 "data_size": 63488 00:25:49.844 }, 00:25:49.844 { 00:25:49.844 "name": "BaseBdev2", 00:25:49.844 "uuid": "1bbc4721-5ca2-5ab7-83c4-f8c02be00d25", 00:25:49.844 "is_configured": true, 00:25:49.844 "data_offset": 2048, 00:25:49.844 "data_size": 63488 00:25:49.844 } 00:25:49.844 ] 00:25:49.844 }' 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.844 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:50.103 [2024-11-05 15:55:22.306834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:50.103 [2024-11-05 15:55:22.307003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:50.103 [2024-11-05 15:55:22.310080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:50.103 [2024-11-05 15:55:22.310224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.103 [2024-11-05 15:55:22.310359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:50.103 [2024-11-05 15:55:22.310447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:50.103 { 00:25:50.103 "results": [ 00:25:50.103 { 00:25:50.103 "job": "raid_bdev1", 00:25:50.103 "core_mask": "0x1", 00:25:50.103 "workload": "randrw", 00:25:50.103 
"percentage": 50, 00:25:50.103 "status": "finished", 00:25:50.103 "queue_depth": 1, 00:25:50.103 "io_size": 131072, 00:25:50.103 "runtime": 1.203707, 00:25:50.103 "iops": 17912.997099792556, 00:25:50.103 "mibps": 2239.1246374740695, 00:25:50.103 "io_failed": 0, 00:25:50.103 "io_timeout": 0, 00:25:50.103 "avg_latency_us": 52.67664452419856, 00:25:50.103 "min_latency_us": 29.341538461538462, 00:25:50.103 "max_latency_us": 1701.4153846153847 00:25:50.103 } 00:25:50.103 ], 00:25:50.103 "core_count": 1 00:25:50.103 } 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61969 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61969 ']' 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61969 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61969 00:25:50.103 killing process with pid 61969 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61969' 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61969 00:25:50.103 [2024-11-05 15:55:22.336753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:50.103 15:55:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61969 00:25:50.103 
[2024-11-05 15:55:22.421542] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:51.036 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:51.036 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WfOQ8Tc6Bi 00:25:51.036 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:51.036 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:25:51.037 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:25:51.037 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:51.037 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:51.037 15:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:25:51.037 00:25:51.037 real 0m3.412s 00:25:51.037 user 0m4.065s 00:25:51.037 sys 0m0.367s 00:25:51.037 ************************************ 00:25:51.037 END TEST raid_read_error_test 00:25:51.037 ************************************ 00:25:51.037 15:55:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:51.037 15:55:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.037 15:55:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:25:51.037 15:55:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:51.037 15:55:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:51.037 15:55:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:51.037 ************************************ 00:25:51.037 START TEST raid_write_error_test 00:25:51.037 ************************************ 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:25:51.037 
15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:25:51.037 15:55:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YzNxzMGR3Y 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62099 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62099 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62099 ']' 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:51.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:51.037 15:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.037 [2024-11-05 15:55:23.230185] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:25:51.037 [2024-11-05 15:55:23.230279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62099 ] 00:25:51.037 [2024-11-05 15:55:23.380915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.295 [2024-11-05 15:55:23.465633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.295 [2024-11-05 15:55:23.577686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:51.295 [2024-11-05 15:55:23.577727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:51.603 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.603 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:25:51.603 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:51.603 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:51.603 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.603 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.861 BaseBdev1_malloc 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.861 true 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.861 [2024-11-05 15:55:24.047578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:51.861 [2024-11-05 15:55:24.047631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.861 [2024-11-05 15:55:24.047647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:51.861 [2024-11-05 15:55:24.047656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.861 [2024-11-05 15:55:24.049465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.861 [2024-11-05 15:55:24.049498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:51.861 BaseBdev1 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.861 BaseBdev2_malloc 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:51.861 15:55:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.861 true 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.861 [2024-11-05 15:55:24.086923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:51.861 [2024-11-05 15:55:24.086971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.861 [2024-11-05 15:55:24.086985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:51.861 [2024-11-05 15:55:24.086993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.861 [2024-11-05 15:55:24.088728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.861 [2024-11-05 15:55:24.088761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:51.861 BaseBdev2 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.861 [2024-11-05 15:55:24.094977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:25:51.861 [2024-11-05 15:55:24.096509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:51.861 [2024-11-05 15:55:24.096669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:51.861 [2024-11-05 15:55:24.096680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:51.861 [2024-11-05 15:55:24.096897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:51.861 [2024-11-05 15:55:24.097028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:51.861 [2024-11-05 15:55:24.097035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:51.861 [2024-11-05 15:55:24.097152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:51.861 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.862 "name": "raid_bdev1", 00:25:51.862 "uuid": "75a3db33-a772-4b69-acf8-d79e62a1ca0d", 00:25:51.862 "strip_size_kb": 0, 00:25:51.862 "state": "online", 00:25:51.862 "raid_level": "raid1", 00:25:51.862 "superblock": true, 00:25:51.862 "num_base_bdevs": 2, 00:25:51.862 "num_base_bdevs_discovered": 2, 00:25:51.862 "num_base_bdevs_operational": 2, 00:25:51.862 "base_bdevs_list": [ 00:25:51.862 { 00:25:51.862 "name": "BaseBdev1", 00:25:51.862 "uuid": "031ae101-f9bd-508e-8600-a58449360db9", 00:25:51.862 "is_configured": true, 00:25:51.862 "data_offset": 2048, 00:25:51.862 "data_size": 63488 00:25:51.862 }, 00:25:51.862 { 00:25:51.862 "name": "BaseBdev2", 00:25:51.862 "uuid": "4a314429-54b8-5c1f-b99a-ae76d647b091", 00:25:51.862 "is_configured": true, 00:25:51.862 "data_offset": 2048, 00:25:51.862 "data_size": 63488 00:25:51.862 } 00:25:51.862 ] 00:25:51.862 }' 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.862 15:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.120 15:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:52.120 15:55:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:52.120 [2024-11-05 15:55:24.507832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.055 [2024-11-05 15:55:25.428419] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:25:53.055 [2024-11-05 15:55:25.428472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:53.055 [2024-11-05 15:55:25.428645] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:53.055 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.056 "name": "raid_bdev1", 00:25:53.056 "uuid": "75a3db33-a772-4b69-acf8-d79e62a1ca0d", 00:25:53.056 "strip_size_kb": 0, 00:25:53.056 "state": "online", 00:25:53.056 "raid_level": "raid1", 00:25:53.056 "superblock": true, 00:25:53.056 "num_base_bdevs": 2, 00:25:53.056 "num_base_bdevs_discovered": 1, 00:25:53.056 "num_base_bdevs_operational": 1, 00:25:53.056 "base_bdevs_list": [ 00:25:53.056 { 00:25:53.056 "name": null, 00:25:53.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.056 "is_configured": false, 00:25:53.056 "data_offset": 0, 00:25:53.056 "data_size": 63488 00:25:53.056 }, 00:25:53.056 { 00:25:53.056 "name": 
"BaseBdev2", 00:25:53.056 "uuid": "4a314429-54b8-5c1f-b99a-ae76d647b091", 00:25:53.056 "is_configured": true, 00:25:53.056 "data_offset": 2048, 00:25:53.056 "data_size": 63488 00:25:53.056 } 00:25:53.056 ] 00:25:53.056 }' 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.056 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.621 [2024-11-05 15:55:25.753653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:53.621 [2024-11-05 15:55:25.753681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:53.621 [2024-11-05 15:55:25.756011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.621 [2024-11-05 15:55:25.756045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.621 [2024-11-05 15:55:25.756097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:53.621 [2024-11-05 15:55:25.756104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62099 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62099 ']' 00:25:53.621 { 00:25:53.621 "results": [ 00:25:53.621 { 00:25:53.621 "job": "raid_bdev1", 00:25:53.621 "core_mask": "0x1", 00:25:53.621 "workload": 
"randrw", 00:25:53.621 "percentage": 50, 00:25:53.621 "status": "finished", 00:25:53.621 "queue_depth": 1, 00:25:53.621 "io_size": 131072, 00:25:53.621 "runtime": 1.244275, 00:25:53.621 "iops": 24913.302927407527, 00:25:53.621 "mibps": 3114.162865925941, 00:25:53.621 "io_failed": 0, 00:25:53.621 "io_timeout": 0, 00:25:53.621 "avg_latency_us": 37.662445289798434, 00:25:53.621 "min_latency_us": 22.252307692307692, 00:25:53.621 "max_latency_us": 1342.2276923076922 00:25:53.621 } 00:25:53.621 ], 00:25:53.621 "core_count": 1 00:25:53.621 } 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62099 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:25:53.621 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:53.622 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62099 00:25:53.622 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:53.622 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:53.622 killing process with pid 62099 00:25:53.622 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62099' 00:25:53.622 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62099 00:25:53.622 [2024-11-05 15:55:25.789277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:53.622 15:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62099 00:25:53.622 [2024-11-05 15:55:25.854676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:54.191 15:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YzNxzMGR3Y 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:25:54.192 00:25:54.192 real 0m3.277s 00:25:54.192 user 0m3.977s 00:25:54.192 sys 0m0.325s 00:25:54.192 ************************************ 00:25:54.192 END TEST raid_write_error_test 00:25:54.192 ************************************ 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:54.192 15:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.192 15:55:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:25:54.192 15:55:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:54.192 15:55:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:25:54.192 15:55:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:54.192 15:55:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:54.192 15:55:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:54.192 ************************************ 00:25:54.192 START TEST raid_state_function_test 00:25:54.192 ************************************ 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:54.192 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:54.193 
15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62232 00:25:54.193 Process raid pid: 62232 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62232' 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62232 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62232 ']' 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.193 15:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:54.193 [2024-11-05 15:55:26.544496] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:25:54.193 [2024-11-05 15:55:26.544613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.456 [2024-11-05 15:55:26.701624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.456 [2024-11-05 15:55:26.784829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.713 [2024-11-05 15:55:26.894405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:54.713 [2024-11-05 15:55:26.894437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.279 [2024-11-05 15:55:27.408455] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:55.279 [2024-11-05 
15:55:27.408500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:55.279 [2024-11-05 15:55:27.408509] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:55.279 [2024-11-05 15:55:27.408516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:55.279 [2024-11-05 15:55:27.408521] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:55.279 [2024-11-05 15:55:27.408528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.279 "name": "Existed_Raid", 00:25:55.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.279 "strip_size_kb": 64, 00:25:55.279 "state": "configuring", 00:25:55.279 "raid_level": "raid0", 00:25:55.279 "superblock": false, 00:25:55.279 "num_base_bdevs": 3, 00:25:55.279 "num_base_bdevs_discovered": 0, 00:25:55.279 "num_base_bdevs_operational": 3, 00:25:55.279 "base_bdevs_list": [ 00:25:55.279 { 00:25:55.279 "name": "BaseBdev1", 00:25:55.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.279 "is_configured": false, 00:25:55.279 "data_offset": 0, 00:25:55.279 "data_size": 0 00:25:55.279 }, 00:25:55.279 { 00:25:55.279 "name": "BaseBdev2", 00:25:55.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.279 "is_configured": false, 00:25:55.279 "data_offset": 0, 00:25:55.279 "data_size": 0 00:25:55.279 }, 00:25:55.279 { 00:25:55.279 "name": "BaseBdev3", 00:25:55.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.279 "is_configured": false, 00:25:55.279 "data_offset": 0, 00:25:55.279 "data_size": 0 00:25:55.279 } 00:25:55.279 ] 00:25:55.279 }' 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.279 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.538 [2024-11-05 15:55:27.732500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:55.538 [2024-11-05 15:55:27.732530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.538 [2024-11-05 15:55:27.740494] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:55.538 [2024-11-05 15:55:27.740529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:55.538 [2024-11-05 15:55:27.740535] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:55.538 [2024-11-05 15:55:27.740542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:55.538 [2024-11-05 15:55:27.740547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:55.538 [2024-11-05 15:55:27.740554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.538 [2024-11-05 15:55:27.768348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.538 BaseBdev1 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.538 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.538 [ 00:25:55.538 { 
00:25:55.538 "name": "BaseBdev1", 00:25:55.538 "aliases": [ 00:25:55.538 "b3a005ee-0ae5-4345-ba3e-613d2dada331" 00:25:55.538 ], 00:25:55.538 "product_name": "Malloc disk", 00:25:55.538 "block_size": 512, 00:25:55.538 "num_blocks": 65536, 00:25:55.538 "uuid": "b3a005ee-0ae5-4345-ba3e-613d2dada331", 00:25:55.538 "assigned_rate_limits": { 00:25:55.538 "rw_ios_per_sec": 0, 00:25:55.538 "rw_mbytes_per_sec": 0, 00:25:55.538 "r_mbytes_per_sec": 0, 00:25:55.538 "w_mbytes_per_sec": 0 00:25:55.538 }, 00:25:55.538 "claimed": true, 00:25:55.538 "claim_type": "exclusive_write", 00:25:55.538 "zoned": false, 00:25:55.538 "supported_io_types": { 00:25:55.538 "read": true, 00:25:55.538 "write": true, 00:25:55.538 "unmap": true, 00:25:55.538 "flush": true, 00:25:55.538 "reset": true, 00:25:55.538 "nvme_admin": false, 00:25:55.538 "nvme_io": false, 00:25:55.538 "nvme_io_md": false, 00:25:55.538 "write_zeroes": true, 00:25:55.538 "zcopy": true, 00:25:55.538 "get_zone_info": false, 00:25:55.538 "zone_management": false, 00:25:55.538 "zone_append": false, 00:25:55.538 "compare": false, 00:25:55.538 "compare_and_write": false, 00:25:55.538 "abort": true, 00:25:55.539 "seek_hole": false, 00:25:55.539 "seek_data": false, 00:25:55.539 "copy": true, 00:25:55.539 "nvme_iov_md": false 00:25:55.539 }, 00:25:55.539 "memory_domains": [ 00:25:55.539 { 00:25:55.539 "dma_device_id": "system", 00:25:55.539 "dma_device_type": 1 00:25:55.539 }, 00:25:55.539 { 00:25:55.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.539 "dma_device_type": 2 00:25:55.539 } 00:25:55.539 ], 00:25:55.539 "driver_specific": {} 00:25:55.539 } 00:25:55.539 ] 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.539 "name": "Existed_Raid", 00:25:55.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.539 "strip_size_kb": 64, 00:25:55.539 "state": "configuring", 00:25:55.539 "raid_level": "raid0", 00:25:55.539 "superblock": false, 00:25:55.539 "num_base_bdevs": 3, 00:25:55.539 
"num_base_bdevs_discovered": 1, 00:25:55.539 "num_base_bdevs_operational": 3, 00:25:55.539 "base_bdevs_list": [ 00:25:55.539 { 00:25:55.539 "name": "BaseBdev1", 00:25:55.539 "uuid": "b3a005ee-0ae5-4345-ba3e-613d2dada331", 00:25:55.539 "is_configured": true, 00:25:55.539 "data_offset": 0, 00:25:55.539 "data_size": 65536 00:25:55.539 }, 00:25:55.539 { 00:25:55.539 "name": "BaseBdev2", 00:25:55.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.539 "is_configured": false, 00:25:55.539 "data_offset": 0, 00:25:55.539 "data_size": 0 00:25:55.539 }, 00:25:55.539 { 00:25:55.539 "name": "BaseBdev3", 00:25:55.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.539 "is_configured": false, 00:25:55.539 "data_offset": 0, 00:25:55.539 "data_size": 0 00:25:55.539 } 00:25:55.539 ] 00:25:55.539 }' 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.539 15:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.838 [2024-11-05 15:55:28.104444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:55.838 [2024-11-05 15:55:28.104581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.838 [2024-11-05 15:55:28.112484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.838 [2024-11-05 15:55:28.114025] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:55.838 [2024-11-05 15:55:28.114055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:55.838 [2024-11-05 15:55:28.114063] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:55.838 [2024-11-05 15:55:28.114071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.838 15:55:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.838 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.838 "name": "Existed_Raid", 00:25:55.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.838 "strip_size_kb": 64, 00:25:55.838 "state": "configuring", 00:25:55.838 "raid_level": "raid0", 00:25:55.838 "superblock": false, 00:25:55.838 "num_base_bdevs": 3, 00:25:55.838 "num_base_bdevs_discovered": 1, 00:25:55.838 "num_base_bdevs_operational": 3, 00:25:55.838 "base_bdevs_list": [ 00:25:55.838 { 00:25:55.838 "name": "BaseBdev1", 00:25:55.838 "uuid": "b3a005ee-0ae5-4345-ba3e-613d2dada331", 00:25:55.838 "is_configured": true, 00:25:55.838 "data_offset": 0, 00:25:55.838 "data_size": 65536 00:25:55.838 }, 00:25:55.838 { 00:25:55.839 "name": "BaseBdev2", 00:25:55.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.839 "is_configured": false, 00:25:55.839 "data_offset": 0, 00:25:55.839 "data_size": 0 00:25:55.839 }, 00:25:55.839 { 00:25:55.839 "name": "BaseBdev3", 00:25:55.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.839 "is_configured": false, 00:25:55.839 "data_offset": 
0, 00:25:55.839 "data_size": 0 00:25:55.839 } 00:25:55.839 ] 00:25:55.839 }' 00:25:55.839 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.839 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 [2024-11-05 15:55:28.482564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:56.097 BaseBdev2 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.097 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 [ 00:25:56.097 { 00:25:56.097 "name": "BaseBdev2", 00:25:56.097 "aliases": [ 00:25:56.097 "17fb075d-d031-4b21-9e1f-6ed5eb6b9058" 00:25:56.097 ], 00:25:56.097 "product_name": "Malloc disk", 00:25:56.097 "block_size": 512, 00:25:56.097 "num_blocks": 65536, 00:25:56.097 "uuid": "17fb075d-d031-4b21-9e1f-6ed5eb6b9058", 00:25:56.097 "assigned_rate_limits": { 00:25:56.097 "rw_ios_per_sec": 0, 00:25:56.097 "rw_mbytes_per_sec": 0, 00:25:56.097 "r_mbytes_per_sec": 0, 00:25:56.097 "w_mbytes_per_sec": 0 00:25:56.097 }, 00:25:56.097 "claimed": true, 00:25:56.097 "claim_type": "exclusive_write", 00:25:56.097 "zoned": false, 00:25:56.097 "supported_io_types": { 00:25:56.098 "read": true, 00:25:56.098 "write": true, 00:25:56.098 "unmap": true, 00:25:56.098 "flush": true, 00:25:56.098 "reset": true, 00:25:56.098 "nvme_admin": false, 00:25:56.098 "nvme_io": false, 00:25:56.098 "nvme_io_md": false, 00:25:56.098 "write_zeroes": true, 00:25:56.098 "zcopy": true, 00:25:56.098 "get_zone_info": false, 00:25:56.098 "zone_management": false, 00:25:56.098 "zone_append": false, 00:25:56.098 "compare": false, 00:25:56.098 "compare_and_write": false, 00:25:56.098 "abort": true, 00:25:56.098 "seek_hole": false, 00:25:56.098 "seek_data": false, 00:25:56.098 "copy": true, 00:25:56.098 "nvme_iov_md": false 00:25:56.098 }, 00:25:56.098 "memory_domains": [ 00:25:56.098 { 00:25:56.098 "dma_device_id": "system", 00:25:56.098 "dma_device_type": 1 00:25:56.098 }, 00:25:56.098 { 00:25:56.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.098 "dma_device_type": 2 00:25:56.098 } 00:25:56.098 ], 00:25:56.098 "driver_specific": {} 00:25:56.098 } 
00:25:56.098 ] 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.098 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.098 15:55:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.355 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.355 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.355 "name": "Existed_Raid", 00:25:56.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.355 "strip_size_kb": 64, 00:25:56.355 "state": "configuring", 00:25:56.355 "raid_level": "raid0", 00:25:56.355 "superblock": false, 00:25:56.355 "num_base_bdevs": 3, 00:25:56.355 "num_base_bdevs_discovered": 2, 00:25:56.355 "num_base_bdevs_operational": 3, 00:25:56.355 "base_bdevs_list": [ 00:25:56.355 { 00:25:56.355 "name": "BaseBdev1", 00:25:56.355 "uuid": "b3a005ee-0ae5-4345-ba3e-613d2dada331", 00:25:56.355 "is_configured": true, 00:25:56.355 "data_offset": 0, 00:25:56.355 "data_size": 65536 00:25:56.355 }, 00:25:56.355 { 00:25:56.355 "name": "BaseBdev2", 00:25:56.355 "uuid": "17fb075d-d031-4b21-9e1f-6ed5eb6b9058", 00:25:56.355 "is_configured": true, 00:25:56.355 "data_offset": 0, 00:25:56.355 "data_size": 65536 00:25:56.355 }, 00:25:56.355 { 00:25:56.355 "name": "BaseBdev3", 00:25:56.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.355 "is_configured": false, 00:25:56.355 "data_offset": 0, 00:25:56.355 "data_size": 0 00:25:56.355 } 00:25:56.355 ] 00:25:56.355 }' 00:25:56.355 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.355 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.614 [2024-11-05 15:55:28.856480] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:56.614 [2024-11-05 15:55:28.856630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:56.614 [2024-11-05 15:55:28.856661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:56.614 [2024-11-05 15:55:28.856940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:56.614 [2024-11-05 15:55:28.857121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:56.614 [2024-11-05 15:55:28.857180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:56.614 [2024-11-05 15:55:28.857419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.614 BaseBdev3 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.614 [ 00:25:56.614 { 00:25:56.614 "name": "BaseBdev3", 00:25:56.614 "aliases": [ 00:25:56.614 "1a540a2c-0a0f-41b0-a494-a3cfc91c9def" 00:25:56.614 ], 00:25:56.614 "product_name": "Malloc disk", 00:25:56.614 "block_size": 512, 00:25:56.614 "num_blocks": 65536, 00:25:56.614 "uuid": "1a540a2c-0a0f-41b0-a494-a3cfc91c9def", 00:25:56.614 "assigned_rate_limits": { 00:25:56.614 "rw_ios_per_sec": 0, 00:25:56.614 "rw_mbytes_per_sec": 0, 00:25:56.614 "r_mbytes_per_sec": 0, 00:25:56.614 "w_mbytes_per_sec": 0 00:25:56.614 }, 00:25:56.614 "claimed": true, 00:25:56.614 "claim_type": "exclusive_write", 00:25:56.614 "zoned": false, 00:25:56.614 "supported_io_types": { 00:25:56.614 "read": true, 00:25:56.614 "write": true, 00:25:56.614 "unmap": true, 00:25:56.614 "flush": true, 00:25:56.614 "reset": true, 00:25:56.614 "nvme_admin": false, 00:25:56.614 "nvme_io": false, 00:25:56.614 "nvme_io_md": false, 00:25:56.614 "write_zeroes": true, 00:25:56.614 "zcopy": true, 00:25:56.614 "get_zone_info": false, 00:25:56.614 "zone_management": false, 00:25:56.614 "zone_append": false, 00:25:56.614 "compare": false, 00:25:56.614 "compare_and_write": false, 00:25:56.614 "abort": true, 00:25:56.614 "seek_hole": false, 00:25:56.614 "seek_data": false, 00:25:56.614 "copy": true, 00:25:56.614 "nvme_iov_md": false 00:25:56.614 }, 00:25:56.614 "memory_domains": [ 00:25:56.614 { 00:25:56.614 "dma_device_id": "system", 00:25:56.614 "dma_device_type": 1 00:25:56.614 }, 00:25:56.614 { 00:25:56.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:25:56.614 "dma_device_type": 2 00:25:56.614 } 00:25:56.614 ], 00:25:56.614 "driver_specific": {} 00:25:56.614 } 00:25:56.614 ] 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.614 "name": "Existed_Raid", 00:25:56.614 "uuid": "47433e72-b4bb-4ac1-923e-c3bd1b39cdd3", 00:25:56.614 "strip_size_kb": 64, 00:25:56.614 "state": "online", 00:25:56.614 "raid_level": "raid0", 00:25:56.614 "superblock": false, 00:25:56.614 "num_base_bdevs": 3, 00:25:56.614 "num_base_bdevs_discovered": 3, 00:25:56.614 "num_base_bdevs_operational": 3, 00:25:56.614 "base_bdevs_list": [ 00:25:56.614 { 00:25:56.614 "name": "BaseBdev1", 00:25:56.614 "uuid": "b3a005ee-0ae5-4345-ba3e-613d2dada331", 00:25:56.614 "is_configured": true, 00:25:56.614 "data_offset": 0, 00:25:56.614 "data_size": 65536 00:25:56.614 }, 00:25:56.614 { 00:25:56.614 "name": "BaseBdev2", 00:25:56.614 "uuid": "17fb075d-d031-4b21-9e1f-6ed5eb6b9058", 00:25:56.614 "is_configured": true, 00:25:56.614 "data_offset": 0, 00:25:56.614 "data_size": 65536 00:25:56.614 }, 00:25:56.614 { 00:25:56.614 "name": "BaseBdev3", 00:25:56.614 "uuid": "1a540a2c-0a0f-41b0-a494-a3cfc91c9def", 00:25:56.614 "is_configured": true, 00:25:56.614 "data_offset": 0, 00:25:56.614 "data_size": 65536 00:25:56.614 } 00:25:56.614 ] 00:25:56.614 }' 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.614 15:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:56.873 [2024-11-05 15:55:29.236864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.873 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:56.873 "name": "Existed_Raid", 00:25:56.873 "aliases": [ 00:25:56.873 "47433e72-b4bb-4ac1-923e-c3bd1b39cdd3" 00:25:56.873 ], 00:25:56.873 "product_name": "Raid Volume", 00:25:56.873 "block_size": 512, 00:25:56.873 "num_blocks": 196608, 00:25:56.873 "uuid": "47433e72-b4bb-4ac1-923e-c3bd1b39cdd3", 00:25:56.873 "assigned_rate_limits": { 00:25:56.873 "rw_ios_per_sec": 0, 00:25:56.873 "rw_mbytes_per_sec": 0, 00:25:56.873 "r_mbytes_per_sec": 0, 00:25:56.873 "w_mbytes_per_sec": 0 00:25:56.873 }, 00:25:56.873 "claimed": false, 00:25:56.873 "zoned": false, 00:25:56.873 "supported_io_types": { 00:25:56.873 "read": true, 00:25:56.873 "write": true, 00:25:56.873 "unmap": true, 00:25:56.873 "flush": true, 00:25:56.873 "reset": true, 00:25:56.873 "nvme_admin": false, 00:25:56.873 "nvme_io": false, 00:25:56.873 "nvme_io_md": false, 00:25:56.873 
"write_zeroes": true, 00:25:56.873 "zcopy": false, 00:25:56.873 "get_zone_info": false, 00:25:56.873 "zone_management": false, 00:25:56.873 "zone_append": false, 00:25:56.873 "compare": false, 00:25:56.873 "compare_and_write": false, 00:25:56.873 "abort": false, 00:25:56.873 "seek_hole": false, 00:25:56.873 "seek_data": false, 00:25:56.873 "copy": false, 00:25:56.873 "nvme_iov_md": false 00:25:56.873 }, 00:25:56.873 "memory_domains": [ 00:25:56.873 { 00:25:56.873 "dma_device_id": "system", 00:25:56.873 "dma_device_type": 1 00:25:56.873 }, 00:25:56.873 { 00:25:56.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.873 "dma_device_type": 2 00:25:56.873 }, 00:25:56.873 { 00:25:56.873 "dma_device_id": "system", 00:25:56.873 "dma_device_type": 1 00:25:56.873 }, 00:25:56.873 { 00:25:56.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.873 "dma_device_type": 2 00:25:56.873 }, 00:25:56.873 { 00:25:56.873 "dma_device_id": "system", 00:25:56.873 "dma_device_type": 1 00:25:56.873 }, 00:25:56.873 { 00:25:56.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.873 "dma_device_type": 2 00:25:56.873 } 00:25:56.873 ], 00:25:56.873 "driver_specific": { 00:25:56.873 "raid": { 00:25:56.873 "uuid": "47433e72-b4bb-4ac1-923e-c3bd1b39cdd3", 00:25:56.873 "strip_size_kb": 64, 00:25:56.873 "state": "online", 00:25:56.873 "raid_level": "raid0", 00:25:56.873 "superblock": false, 00:25:56.873 "num_base_bdevs": 3, 00:25:56.873 "num_base_bdevs_discovered": 3, 00:25:56.873 "num_base_bdevs_operational": 3, 00:25:56.873 "base_bdevs_list": [ 00:25:56.873 { 00:25:56.873 "name": "BaseBdev1", 00:25:56.873 "uuid": "b3a005ee-0ae5-4345-ba3e-613d2dada331", 00:25:56.873 "is_configured": true, 00:25:56.873 "data_offset": 0, 00:25:56.873 "data_size": 65536 00:25:56.873 }, 00:25:56.873 { 00:25:56.873 "name": "BaseBdev2", 00:25:56.873 "uuid": "17fb075d-d031-4b21-9e1f-6ed5eb6b9058", 00:25:56.873 "is_configured": true, 00:25:56.873 "data_offset": 0, 00:25:56.873 "data_size": 65536 00:25:56.873 }, 
00:25:56.873 { 00:25:56.873 "name": "BaseBdev3", 00:25:56.873 "uuid": "1a540a2c-0a0f-41b0-a494-a3cfc91c9def", 00:25:56.874 "is_configured": true, 00:25:56.874 "data_offset": 0, 00:25:56.874 "data_size": 65536 00:25:56.874 } 00:25:56.874 ] 00:25:56.874 } 00:25:56.874 } 00:25:56.874 }' 00:25:56.874 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:57.132 BaseBdev2 00:25:57.132 BaseBdev3' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.132 15:55:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.132 [2024-11-05 15:55:29.432650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.132 [2024-11-05 15:55:29.432745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:57.132 [2024-11-05 15:55:29.432794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.132 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.133 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.133 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.133 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.133 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.133 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.133 "name": "Existed_Raid", 00:25:57.133 "uuid": "47433e72-b4bb-4ac1-923e-c3bd1b39cdd3", 00:25:57.133 "strip_size_kb": 64, 00:25:57.133 "state": "offline", 00:25:57.133 "raid_level": "raid0", 00:25:57.133 "superblock": false, 00:25:57.133 "num_base_bdevs": 3, 00:25:57.133 "num_base_bdevs_discovered": 2, 00:25:57.133 "num_base_bdevs_operational": 2, 00:25:57.133 "base_bdevs_list": [ 00:25:57.133 { 00:25:57.133 "name": null, 00:25:57.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.133 "is_configured": false, 00:25:57.133 "data_offset": 0, 00:25:57.133 "data_size": 65536 00:25:57.133 }, 00:25:57.133 { 00:25:57.133 "name": "BaseBdev2", 00:25:57.133 "uuid": "17fb075d-d031-4b21-9e1f-6ed5eb6b9058", 00:25:57.133 "is_configured": true, 00:25:57.133 "data_offset": 0, 00:25:57.133 "data_size": 65536 00:25:57.133 }, 00:25:57.133 { 00:25:57.133 "name": "BaseBdev3", 00:25:57.133 "uuid": "1a540a2c-0a0f-41b0-a494-a3cfc91c9def", 00:25:57.133 "is_configured": true, 00:25:57.133 "data_offset": 0, 00:25:57.133 "data_size": 65536 00:25:57.133 } 00:25:57.133 ] 00:25:57.133 }' 00:25:57.133 
15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.133 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.391 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:57.391 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:57.391 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.391 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:57.391 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.391 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.391 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.651 [2024-11-05 15:55:29.818917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.651 [2024-11-05 15:55:29.908464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:57.651 [2024-11-05 15:55:29.908504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.651 
15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.651 15:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.651 BaseBdev2
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.651 [
00:25:57.651 {
00:25:57.651 "name": "BaseBdev2",
00:25:57.651 "aliases": [
00:25:57.651 "8f5af751-58b6-4a03-a601-e38d7138d140"
00:25:57.651 ],
00:25:57.651 "product_name": "Malloc disk",
00:25:57.651 "block_size": 512,
00:25:57.651 "num_blocks": 65536,
00:25:57.651 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140",
00:25:57.651 "assigned_rate_limits": {
00:25:57.651 "rw_ios_per_sec": 0,
00:25:57.651 "rw_mbytes_per_sec": 0,
00:25:57.651 "r_mbytes_per_sec": 0,
00:25:57.651 "w_mbytes_per_sec": 0
00:25:57.651 },
00:25:57.651 "claimed": false,
00:25:57.651 "zoned": false,
00:25:57.651 "supported_io_types": {
00:25:57.651 "read": true,
00:25:57.651 "write": true,
00:25:57.651 "unmap": true,
00:25:57.651 "flush": true,
00:25:57.651 "reset": true,
00:25:57.651 "nvme_admin": false,
00:25:57.651 "nvme_io": false,
00:25:57.651 "nvme_io_md": false,
00:25:57.651 "write_zeroes": true,
00:25:57.651 "zcopy": true,
00:25:57.651 "get_zone_info": false,
00:25:57.651 "zone_management": false,
00:25:57.651 "zone_append": false,
00:25:57.651 "compare": false,
00:25:57.651 "compare_and_write": false,
00:25:57.651 "abort": true,
00:25:57.651 "seek_hole": false,
00:25:57.651 "seek_data": false,
00:25:57.651 "copy": true,
00:25:57.651 "nvme_iov_md": false
00:25:57.651 },
00:25:57.651 "memory_domains": [
00:25:57.651 {
00:25:57.651 "dma_device_id": "system",
00:25:57.651 "dma_device_type": 1
00:25:57.651 },
00:25:57.651 {
00:25:57.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:57.651 "dma_device_type": 2
00:25:57.651 }
00:25:57.651 ],
00:25:57.651 "driver_specific": {}
00:25:57.651 }
00:25:57.651 ]
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.651 BaseBdev3
00:25:57.651 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.910 [
00:25:57.910 {
00:25:57.910 "name": "BaseBdev3",
00:25:57.910 "aliases": [
00:25:57.910 "865eea5b-3724-41b5-9949-49a9bb25bae2"
00:25:57.910 ],
00:25:57.910 "product_name": "Malloc disk",
00:25:57.910 "block_size": 512,
00:25:57.910 "num_blocks": 65536,
00:25:57.910 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2",
00:25:57.910 "assigned_rate_limits": {
00:25:57.910 "rw_ios_per_sec": 0,
00:25:57.910 "rw_mbytes_per_sec": 0,
00:25:57.910 "r_mbytes_per_sec": 0,
00:25:57.910 "w_mbytes_per_sec": 0
00:25:57.910 },
00:25:57.910 "claimed": false,
00:25:57.910 "zoned": false,
00:25:57.910 "supported_io_types": {
00:25:57.910 "read": true,
00:25:57.910 "write": true,
00:25:57.910 "unmap": true,
00:25:57.910 "flush": true,
00:25:57.910 "reset": true,
00:25:57.910 "nvme_admin": false,
00:25:57.910 "nvme_io": false,
00:25:57.910 "nvme_io_md": false,
00:25:57.910 "write_zeroes": true,
00:25:57.910 "zcopy": true,
00:25:57.910 "get_zone_info": false,
00:25:57.910 "zone_management": false,
00:25:57.910 "zone_append": false,
00:25:57.910 "compare": false,
00:25:57.910 "compare_and_write": false,
00:25:57.910 "abort": true,
00:25:57.910 "seek_hole": false,
00:25:57.910 "seek_data": false,
00:25:57.910 "copy": true,
00:25:57.910 "nvme_iov_md": false
00:25:57.910 },
00:25:57.910 "memory_domains": [
00:25:57.910 {
00:25:57.910 "dma_device_id": "system",
00:25:57.910 "dma_device_type": 1
00:25:57.910 },
00:25:57.910 {
00:25:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:57.910 "dma_device_type": 2
00:25:57.910 }
00:25:57.910 ],
00:25:57.910 "driver_specific": {}
00:25:57.910 }
00:25:57.910 ]
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.910 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.910 [2024-11-05 15:55:30.093072] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:57.911 [2024-11-05 15:55:30.093192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:57.911 [2024-11-05 15:55:30.093251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:57.911 [2024-11-05 15:55:30.094791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:57.911 "name": "Existed_Raid",
00:25:57.911 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:57.911 "strip_size_kb": 64,
00:25:57.911 "state": "configuring",
00:25:57.911 "raid_level": "raid0",
00:25:57.911 "superblock": false,
00:25:57.911 "num_base_bdevs": 3,
00:25:57.911 "num_base_bdevs_discovered": 2,
00:25:57.911 "num_base_bdevs_operational": 3,
00:25:57.911 "base_bdevs_list": [
00:25:57.911 {
00:25:57.911 "name": "BaseBdev1",
00:25:57.911 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:57.911 "is_configured": false,
00:25:57.911 "data_offset": 0,
00:25:57.911 "data_size": 0
00:25:57.911 },
00:25:57.911 {
00:25:57.911 "name": "BaseBdev2",
00:25:57.911 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140",
00:25:57.911 "is_configured": true,
00:25:57.911 "data_offset": 0,
00:25:57.911 "data_size": 65536
00:25:57.911 },
00:25:57.911 {
00:25:57.911 "name": "BaseBdev3",
00:25:57.911 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2",
00:25:57.911 "is_configured": true,
00:25:57.911 "data_offset": 0,
00:25:57.911 "data_size": 65536
00:25:57.911 }
00:25:57.911 ]
00:25:57.911 }'
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:57.911 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.169 [2024-11-05 15:55:30.421138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:58.169 "name": "Existed_Raid",
00:25:58.169 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:58.169 "strip_size_kb": 64,
00:25:58.169 "state": "configuring",
00:25:58.169 "raid_level": "raid0",
00:25:58.169 "superblock": false,
00:25:58.169 "num_base_bdevs": 3,
00:25:58.169 "num_base_bdevs_discovered": 1,
00:25:58.169 "num_base_bdevs_operational": 3,
00:25:58.169 "base_bdevs_list": [
00:25:58.169 {
00:25:58.169 "name": "BaseBdev1",
00:25:58.169 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:58.169 "is_configured": false,
00:25:58.169 "data_offset": 0,
00:25:58.169 "data_size": 0
00:25:58.169 },
00:25:58.169 {
00:25:58.169 "name": null,
00:25:58.169 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140",
00:25:58.169 "is_configured": false,
00:25:58.169 "data_offset": 0,
00:25:58.169 "data_size": 65536
00:25:58.169 },
00:25:58.169 {
00:25:58.169 "name": "BaseBdev3",
00:25:58.169 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2",
00:25:58.169 "is_configured": true,
00:25:58.169 "data_offset": 0,
00:25:58.169 "data_size": 65536
00:25:58.169 }
00:25:58.169 ]
00:25:58.169 }'
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:58.169 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.428 [2024-11-05 15:55:30.811308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:58.428 BaseBdev1
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.428 [
00:25:58.428 {
00:25:58.428 "name": "BaseBdev1",
00:25:58.428 "aliases": [
00:25:58.428 "43d3920e-9f5c-4043-b7fb-2f0286257ba2"
00:25:58.428 ],
00:25:58.428 "product_name": "Malloc disk",
00:25:58.428 "block_size": 512,
00:25:58.428 "num_blocks": 65536,
00:25:58.428 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2",
00:25:58.428 "assigned_rate_limits": {
00:25:58.428 "rw_ios_per_sec": 0,
00:25:58.428 "rw_mbytes_per_sec": 0,
00:25:58.428 "r_mbytes_per_sec": 0,
00:25:58.428 "w_mbytes_per_sec": 0
00:25:58.428 },
00:25:58.428 "claimed": true,
00:25:58.428 "claim_type": "exclusive_write",
00:25:58.428 "zoned": false,
00:25:58.428 "supported_io_types": {
00:25:58.428 "read": true,
00:25:58.428 "write": true,
00:25:58.428 "unmap": true,
00:25:58.428 "flush": true,
00:25:58.428 "reset": true,
00:25:58.428 "nvme_admin": false,
00:25:58.428 "nvme_io": false,
00:25:58.428 "nvme_io_md": false,
00:25:58.428 "write_zeroes": true,
00:25:58.428 "zcopy": true,
00:25:58.428 "get_zone_info": false,
00:25:58.428 "zone_management": false,
00:25:58.428 "zone_append": false,
00:25:58.428 "compare": false,
00:25:58.428 "compare_and_write": false,
00:25:58.428 "abort": true,
00:25:58.428 "seek_hole": false,
00:25:58.428 "seek_data": false,
00:25:58.428 "copy": true,
00:25:58.428 "nvme_iov_md": false
00:25:58.428 },
00:25:58.428 "memory_domains": [
00:25:58.428 {
00:25:58.428 "dma_device_id": "system",
00:25:58.428 "dma_device_type": 1
00:25:58.428 },
00:25:58.428 {
00:25:58.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:58.428 "dma_device_type": 2
00:25:58.428 }
00:25:58.428 ],
00:25:58.428 "driver_specific": {}
00:25:58.428 }
00:25:58.428 ]
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.428 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.687 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.687 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:58.687 "name": "Existed_Raid",
00:25:58.687 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:58.687 "strip_size_kb": 64,
00:25:58.687 "state": "configuring",
00:25:58.687 "raid_level": "raid0",
00:25:58.687 "superblock": false,
00:25:58.687 "num_base_bdevs": 3,
00:25:58.687 "num_base_bdevs_discovered": 2,
00:25:58.687 "num_base_bdevs_operational": 3,
00:25:58.687 "base_bdevs_list": [
00:25:58.687 {
00:25:58.687 "name": "BaseBdev1",
00:25:58.687 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2",
00:25:58.687 "is_configured": true,
00:25:58.687 "data_offset": 0,
00:25:58.687 "data_size": 65536
00:25:58.687 },
00:25:58.687 {
00:25:58.687 "name": null,
00:25:58.687 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140",
00:25:58.687 "is_configured": false,
00:25:58.687 "data_offset": 0,
00:25:58.687 "data_size": 65536
00:25:58.687 },
00:25:58.687 {
00:25:58.687 "name": "BaseBdev3",
00:25:58.687 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2",
00:25:58.687 "is_configured": true,
00:25:58.687 "data_offset": 0,
00:25:58.687 "data_size": 65536
00:25:58.687 }
00:25:58.687 ]
00:25:58.687 }'
00:25:58.687 15:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:58.687 15:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.945 [2024-11-05 15:55:31.159407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:58.945 "name": "Existed_Raid",
00:25:58.945 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:58.945 "strip_size_kb": 64,
00:25:58.945 "state": "configuring",
00:25:58.945 "raid_level": "raid0",
00:25:58.945 "superblock": false,
00:25:58.945 "num_base_bdevs": 3,
00:25:58.945 "num_base_bdevs_discovered": 1,
00:25:58.945 "num_base_bdevs_operational": 3,
00:25:58.945 "base_bdevs_list": [
00:25:58.945 {
00:25:58.945 "name": "BaseBdev1",
00:25:58.945 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2",
00:25:58.945 "is_configured": true,
00:25:58.945 "data_offset": 0,
00:25:58.945 "data_size": 65536
00:25:58.945 },
00:25:58.945 {
00:25:58.945 "name": null,
00:25:58.945 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140",
00:25:58.945 "is_configured": false,
00:25:58.945 "data_offset": 0,
00:25:58.945 "data_size": 65536
00:25:58.945 },
00:25:58.945 {
00:25:58.945 "name": null,
00:25:58.945 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2",
00:25:58.945 "is_configured": false,
00:25:58.945 "data_offset": 0,
00:25:58.945 "data_size": 65536
00:25:58.945 }
00:25:58.945 ]
00:25:58.945 }'
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:58.945 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.204 [2024-11-05 15:55:31.511513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.204 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:59.205 "name": "Existed_Raid",
00:25:59.205 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:59.205 "strip_size_kb": 64,
00:25:59.205 "state": "configuring",
00:25:59.205 "raid_level": "raid0",
00:25:59.205 "superblock": false,
00:25:59.205 "num_base_bdevs": 3,
00:25:59.205 "num_base_bdevs_discovered": 2,
00:25:59.205 "num_base_bdevs_operational": 3,
00:25:59.205 "base_bdevs_list": [
00:25:59.205 {
00:25:59.205 "name": "BaseBdev1",
00:25:59.205 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2",
00:25:59.205 "is_configured": true,
00:25:59.205 "data_offset": 0,
00:25:59.205 "data_size": 65536
00:25:59.205 },
00:25:59.205 {
00:25:59.205 "name": null,
00:25:59.205 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140",
00:25:59.205 "is_configured": false,
00:25:59.205 "data_offset": 0,
00:25:59.205 "data_size": 65536
00:25:59.205 },
00:25:59.205 {
00:25:59.205 "name": "BaseBdev3",
00:25:59.205 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2",
00:25:59.205 "is_configured": true,
00:25:59.205 "data_offset": 0,
00:25:59.205 "data_size": 65536
00:25:59.205 }
00:25:59.205 ]
00:25:59.205 }'
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:59.205 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.463 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.463 [2024-11-05 15:55:31.863598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:59.745 "name": "Existed_Raid",
00:25:59.745 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:59.745 "strip_size_kb": 64,
00:25:59.745 "state": "configuring",
00:25:59.745 "raid_level": "raid0",
00:25:59.745 "superblock": false,
00:25:59.745 "num_base_bdevs": 3,
00:25:59.745 "num_base_bdevs_discovered": 1,
00:25:59.745 "num_base_bdevs_operational": 3,
00:25:59.745 "base_bdevs_list": [
00:25:59.745 {
00:25:59.745 "name": null,
00:25:59.745 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2",
00:25:59.745 "is_configured": false,
00:25:59.745 "data_offset": 0,
00:25:59.745 "data_size": 65536
00:25:59.745 },
00:25:59.745 {
00:25:59.745 "name": null,
00:25:59.745 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140",
00:25:59.745 "is_configured": false,
00:25:59.745 "data_offset": 0,
00:25:59.745 "data_size": 65536
00:25:59.745 },
00:25:59.745 {
00:25:59.745 "name": "BaseBdev3",
00:25:59.745 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2",
00:25:59.745 "is_configured": true,
00:25:59.745 "data_offset": 0,
00:25:59.745 "data_size": 65536
00:25:59.745 }
00:25:59.745 ]
00:25:59.745 }'
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:59.745 15:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.003 [2024-11-05 15:55:32.273672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:00.003 15:55:32 bdev_raid.raid_state_function_test
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.003 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.003 "name": "Existed_Raid", 00:26:00.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.003 "strip_size_kb": 64, 00:26:00.003 "state": "configuring", 00:26:00.003 "raid_level": "raid0", 00:26:00.004 "superblock": false, 00:26:00.004 "num_base_bdevs": 3, 00:26:00.004 "num_base_bdevs_discovered": 2, 00:26:00.004 "num_base_bdevs_operational": 3, 00:26:00.004 "base_bdevs_list": [ 00:26:00.004 { 00:26:00.004 "name": null, 00:26:00.004 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2", 00:26:00.004 "is_configured": false, 00:26:00.004 "data_offset": 0, 00:26:00.004 "data_size": 65536 00:26:00.004 }, 00:26:00.004 { 00:26:00.004 "name": "BaseBdev2", 00:26:00.004 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140", 00:26:00.004 "is_configured": true, 00:26:00.004 "data_offset": 0, 00:26:00.004 "data_size": 65536 00:26:00.004 }, 00:26:00.004 { 00:26:00.004 "name": "BaseBdev3", 00:26:00.004 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2", 00:26:00.004 "is_configured": true, 00:26:00.004 "data_offset": 0, 00:26:00.004 "data_size": 65536 00:26:00.004 } 00:26:00.004 ] 00:26:00.004 }' 00:26:00.004 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.004 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 43d3920e-9f5c-4043-b7fb-2f0286257ba2 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.262 [2024-11-05 15:55:32.667574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:00.262 [2024-11-05 15:55:32.667602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:00.262 [2024-11-05 15:55:32.667609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:00.262 [2024-11-05 15:55:32.667800] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:00.262 [2024-11-05 15:55:32.667919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:00.262 [2024-11-05 15:55:32.667926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:00.262 NewBaseBdev 00:26:00.262 [2024-11-05 15:55:32.668096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.262 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.521 
15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.521 [ 00:26:00.521 { 00:26:00.521 "name": "NewBaseBdev", 00:26:00.521 "aliases": [ 00:26:00.521 "43d3920e-9f5c-4043-b7fb-2f0286257ba2" 00:26:00.521 ], 00:26:00.521 "product_name": "Malloc disk", 00:26:00.521 "block_size": 512, 00:26:00.521 "num_blocks": 65536, 00:26:00.521 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2", 00:26:00.521 "assigned_rate_limits": { 00:26:00.521 "rw_ios_per_sec": 0, 00:26:00.521 "rw_mbytes_per_sec": 0, 00:26:00.521 "r_mbytes_per_sec": 0, 00:26:00.521 "w_mbytes_per_sec": 0 00:26:00.521 }, 00:26:00.521 "claimed": true, 00:26:00.521 "claim_type": "exclusive_write", 00:26:00.521 "zoned": false, 00:26:00.521 "supported_io_types": { 00:26:00.521 "read": true, 00:26:00.521 "write": true, 00:26:00.521 "unmap": true, 00:26:00.521 "flush": true, 00:26:00.521 "reset": true, 00:26:00.521 "nvme_admin": false, 00:26:00.521 "nvme_io": false, 00:26:00.521 "nvme_io_md": false, 00:26:00.521 "write_zeroes": true, 00:26:00.521 "zcopy": true, 00:26:00.521 "get_zone_info": false, 00:26:00.521 "zone_management": false, 00:26:00.521 "zone_append": false, 00:26:00.521 "compare": false, 00:26:00.521 "compare_and_write": false, 00:26:00.521 "abort": true, 00:26:00.521 "seek_hole": false, 00:26:00.521 "seek_data": false, 00:26:00.521 "copy": true, 00:26:00.521 "nvme_iov_md": false 00:26:00.521 }, 00:26:00.521 "memory_domains": [ 00:26:00.521 { 00:26:00.521 "dma_device_id": "system", 00:26:00.521 "dma_device_type": 1 00:26:00.521 }, 00:26:00.521 { 00:26:00.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.521 "dma_device_type": 2 00:26:00.521 } 00:26:00.521 ], 00:26:00.521 "driver_specific": {} 00:26:00.521 } 00:26:00.521 ] 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:00.521 15:55:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.521 "name": "Existed_Raid", 00:26:00.521 "uuid": "9ab83422-d71c-4b48-abdb-21de15c3826e", 00:26:00.521 "strip_size_kb": 64, 00:26:00.521 "state": "online", 00:26:00.521 "raid_level": 
"raid0", 00:26:00.521 "superblock": false, 00:26:00.521 "num_base_bdevs": 3, 00:26:00.521 "num_base_bdevs_discovered": 3, 00:26:00.521 "num_base_bdevs_operational": 3, 00:26:00.521 "base_bdevs_list": [ 00:26:00.521 { 00:26:00.521 "name": "NewBaseBdev", 00:26:00.521 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2", 00:26:00.521 "is_configured": true, 00:26:00.521 "data_offset": 0, 00:26:00.521 "data_size": 65536 00:26:00.521 }, 00:26:00.521 { 00:26:00.521 "name": "BaseBdev2", 00:26:00.521 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140", 00:26:00.521 "is_configured": true, 00:26:00.521 "data_offset": 0, 00:26:00.521 "data_size": 65536 00:26:00.521 }, 00:26:00.521 { 00:26:00.521 "name": "BaseBdev3", 00:26:00.521 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2", 00:26:00.521 "is_configured": true, 00:26:00.521 "data_offset": 0, 00:26:00.521 "data_size": 65536 00:26:00.521 } 00:26:00.521 ] 00:26:00.521 }' 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.521 15:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.780 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:00.780 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:00.780 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:00.780 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:00.780 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:00.780 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:00.780 15:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:00.780 15:55:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:00.780 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.780 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.780 [2024-11-05 15:55:33.007950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:00.780 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.780 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:00.780 "name": "Existed_Raid", 00:26:00.780 "aliases": [ 00:26:00.780 "9ab83422-d71c-4b48-abdb-21de15c3826e" 00:26:00.780 ], 00:26:00.780 "product_name": "Raid Volume", 00:26:00.780 "block_size": 512, 00:26:00.780 "num_blocks": 196608, 00:26:00.780 "uuid": "9ab83422-d71c-4b48-abdb-21de15c3826e", 00:26:00.780 "assigned_rate_limits": { 00:26:00.780 "rw_ios_per_sec": 0, 00:26:00.780 "rw_mbytes_per_sec": 0, 00:26:00.780 "r_mbytes_per_sec": 0, 00:26:00.780 "w_mbytes_per_sec": 0 00:26:00.780 }, 00:26:00.780 "claimed": false, 00:26:00.780 "zoned": false, 00:26:00.780 "supported_io_types": { 00:26:00.780 "read": true, 00:26:00.780 "write": true, 00:26:00.780 "unmap": true, 00:26:00.780 "flush": true, 00:26:00.780 "reset": true, 00:26:00.780 "nvme_admin": false, 00:26:00.780 "nvme_io": false, 00:26:00.780 "nvme_io_md": false, 00:26:00.780 "write_zeroes": true, 00:26:00.780 "zcopy": false, 00:26:00.780 "get_zone_info": false, 00:26:00.780 "zone_management": false, 00:26:00.780 "zone_append": false, 00:26:00.780 "compare": false, 00:26:00.780 "compare_and_write": false, 00:26:00.780 "abort": false, 00:26:00.780 "seek_hole": false, 00:26:00.780 "seek_data": false, 00:26:00.780 "copy": false, 00:26:00.780 "nvme_iov_md": false 00:26:00.780 }, 00:26:00.780 "memory_domains": [ 00:26:00.780 { 00:26:00.780 "dma_device_id": "system", 00:26:00.780 "dma_device_type": 1 00:26:00.780 }, 00:26:00.780 { 
00:26:00.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.780 "dma_device_type": 2 00:26:00.780 }, 00:26:00.780 { 00:26:00.780 "dma_device_id": "system", 00:26:00.780 "dma_device_type": 1 00:26:00.780 }, 00:26:00.780 { 00:26:00.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.780 "dma_device_type": 2 00:26:00.780 }, 00:26:00.780 { 00:26:00.780 "dma_device_id": "system", 00:26:00.780 "dma_device_type": 1 00:26:00.780 }, 00:26:00.780 { 00:26:00.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.780 "dma_device_type": 2 00:26:00.780 } 00:26:00.780 ], 00:26:00.780 "driver_specific": { 00:26:00.780 "raid": { 00:26:00.780 "uuid": "9ab83422-d71c-4b48-abdb-21de15c3826e", 00:26:00.780 "strip_size_kb": 64, 00:26:00.780 "state": "online", 00:26:00.780 "raid_level": "raid0", 00:26:00.780 "superblock": false, 00:26:00.780 "num_base_bdevs": 3, 00:26:00.780 "num_base_bdevs_discovered": 3, 00:26:00.780 "num_base_bdevs_operational": 3, 00:26:00.780 "base_bdevs_list": [ 00:26:00.780 { 00:26:00.780 "name": "NewBaseBdev", 00:26:00.780 "uuid": "43d3920e-9f5c-4043-b7fb-2f0286257ba2", 00:26:00.780 "is_configured": true, 00:26:00.780 "data_offset": 0, 00:26:00.780 "data_size": 65536 00:26:00.780 }, 00:26:00.780 { 00:26:00.780 "name": "BaseBdev2", 00:26:00.780 "uuid": "8f5af751-58b6-4a03-a601-e38d7138d140", 00:26:00.780 "is_configured": true, 00:26:00.780 "data_offset": 0, 00:26:00.780 "data_size": 65536 00:26:00.780 }, 00:26:00.780 { 00:26:00.780 "name": "BaseBdev3", 00:26:00.780 "uuid": "865eea5b-3724-41b5-9949-49a9bb25bae2", 00:26:00.781 "is_configured": true, 00:26:00.781 "data_offset": 0, 00:26:00.781 "data_size": 65536 00:26:00.781 } 00:26:00.781 ] 00:26:00.781 } 00:26:00.781 } 00:26:00.781 }' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='NewBaseBdev 00:26:00.781 BaseBdev2 00:26:00.781 BaseBdev3' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.781 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.038 [2024-11-05 15:55:33.207727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:01.038 [2024-11-05 15:55:33.207829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:01.038 [2024-11-05 15:55:33.207909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.038 [2024-11-05 15:55:33.207958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:26:01.038 [2024-11-05 15:55:33.207968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62232 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62232 ']' 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62232 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62232 00:26:01.038 killing process with pid 62232 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62232' 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62232 00:26:01.038 [2024-11-05 15:55:33.240310] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:01.038 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62232 00:26:01.038 [2024-11-05 15:55:33.388854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.604 ************************************ 00:26:01.604 END TEST raid_state_function_test 00:26:01.604 ************************************ 00:26:01.604 15:55:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:01.604 00:26:01.604 real 0m7.467s 00:26:01.604 user 0m12.189s 00:26:01.604 sys 0m1.126s 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.604 15:55:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:26:01.604 15:55:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:01.604 15:55:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:01.604 15:55:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.604 ************************************ 00:26:01.604 START TEST raid_state_function_test_sb 00:26:01.604 ************************************ 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:01.604 Process raid pid: 62820 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62820 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62820' 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62820 00:26:01.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62820 ']' 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.604 15:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:01.862 [2024-11-05 15:55:34.062005] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:26:01.862 [2024-11-05 15:55:34.062395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.862 [2024-11-05 15:55:34.231166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.121 [2024-11-05 15:55:34.359057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.121 [2024-11-05 15:55:34.495279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.121 [2024-11-05 15:55:34.495317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.686 [2024-11-05 15:55:34.896600] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:02.686 [2024-11-05 15:55:34.896651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:02.686 [2024-11-05 15:55:34.896662] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.686 [2024-11-05 15:55:34.896671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.686 [2024-11-05 15:55:34.896678] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:26:02.686 [2024-11-05 15:55:34.896687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.686 "name": "Existed_Raid", 00:26:02.686 "uuid": "809a5f07-bfdd-4fbd-b6f3-c7d467b1a09a", 00:26:02.686 "strip_size_kb": 64, 00:26:02.686 "state": "configuring", 00:26:02.686 "raid_level": "raid0", 00:26:02.686 "superblock": true, 00:26:02.686 "num_base_bdevs": 3, 00:26:02.686 "num_base_bdevs_discovered": 0, 00:26:02.686 "num_base_bdevs_operational": 3, 00:26:02.686 "base_bdevs_list": [ 00:26:02.686 { 00:26:02.686 "name": "BaseBdev1", 00:26:02.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.686 "is_configured": false, 00:26:02.686 "data_offset": 0, 00:26:02.686 "data_size": 0 00:26:02.686 }, 00:26:02.686 { 00:26:02.686 "name": "BaseBdev2", 00:26:02.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.686 "is_configured": false, 00:26:02.686 "data_offset": 0, 00:26:02.686 "data_size": 0 00:26:02.686 }, 00:26:02.686 { 00:26:02.686 "name": "BaseBdev3", 00:26:02.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.686 "is_configured": false, 00:26:02.686 "data_offset": 0, 00:26:02.686 "data_size": 0 00:26:02.686 } 00:26:02.686 ] 00:26:02.686 }' 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.686 15:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.944 [2024-11-05 15:55:35.216631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:02.944 [2024-11-05 15:55:35.216663] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.944 [2024-11-05 15:55:35.224649] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:02.944 [2024-11-05 15:55:35.224691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:02.944 [2024-11-05 15:55:35.224699] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.944 [2024-11-05 15:55:35.224708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.944 [2024-11-05 15:55:35.224714] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:02.944 [2024-11-05 15:55:35.224722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.944 [2024-11-05 15:55:35.256874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:02.944 BaseBdev1 
00:26:02.944 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.945 [ 00:26:02.945 { 00:26:02.945 "name": "BaseBdev1", 00:26:02.945 "aliases": [ 00:26:02.945 "e4fd22a7-9a28-407e-88bd-4441f0a3b32b" 00:26:02.945 ], 00:26:02.945 "product_name": "Malloc disk", 00:26:02.945 "block_size": 512, 00:26:02.945 "num_blocks": 65536, 00:26:02.945 "uuid": "e4fd22a7-9a28-407e-88bd-4441f0a3b32b", 00:26:02.945 "assigned_rate_limits": { 00:26:02.945 
"rw_ios_per_sec": 0, 00:26:02.945 "rw_mbytes_per_sec": 0, 00:26:02.945 "r_mbytes_per_sec": 0, 00:26:02.945 "w_mbytes_per_sec": 0 00:26:02.945 }, 00:26:02.945 "claimed": true, 00:26:02.945 "claim_type": "exclusive_write", 00:26:02.945 "zoned": false, 00:26:02.945 "supported_io_types": { 00:26:02.945 "read": true, 00:26:02.945 "write": true, 00:26:02.945 "unmap": true, 00:26:02.945 "flush": true, 00:26:02.945 "reset": true, 00:26:02.945 "nvme_admin": false, 00:26:02.945 "nvme_io": false, 00:26:02.945 "nvme_io_md": false, 00:26:02.945 "write_zeroes": true, 00:26:02.945 "zcopy": true, 00:26:02.945 "get_zone_info": false, 00:26:02.945 "zone_management": false, 00:26:02.945 "zone_append": false, 00:26:02.945 "compare": false, 00:26:02.945 "compare_and_write": false, 00:26:02.945 "abort": true, 00:26:02.945 "seek_hole": false, 00:26:02.945 "seek_data": false, 00:26:02.945 "copy": true, 00:26:02.945 "nvme_iov_md": false 00:26:02.945 }, 00:26:02.945 "memory_domains": [ 00:26:02.945 { 00:26:02.945 "dma_device_id": "system", 00:26:02.945 "dma_device_type": 1 00:26:02.945 }, 00:26:02.945 { 00:26:02.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.945 "dma_device_type": 2 00:26:02.945 } 00:26:02.945 ], 00:26:02.945 "driver_specific": {} 00:26:02.945 } 00:26:02.945 ] 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.945 "name": "Existed_Raid", 00:26:02.945 "uuid": "ae09e163-794f-40a0-a551-75db905fd4a3", 00:26:02.945 "strip_size_kb": 64, 00:26:02.945 "state": "configuring", 00:26:02.945 "raid_level": "raid0", 00:26:02.945 "superblock": true, 00:26:02.945 "num_base_bdevs": 3, 00:26:02.945 "num_base_bdevs_discovered": 1, 00:26:02.945 "num_base_bdevs_operational": 3, 00:26:02.945 "base_bdevs_list": [ 00:26:02.945 { 00:26:02.945 "name": "BaseBdev1", 00:26:02.945 "uuid": "e4fd22a7-9a28-407e-88bd-4441f0a3b32b", 00:26:02.945 "is_configured": true, 00:26:02.945 "data_offset": 2048, 00:26:02.945 "data_size": 63488 
00:26:02.945 }, 00:26:02.945 { 00:26:02.945 "name": "BaseBdev2", 00:26:02.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.945 "is_configured": false, 00:26:02.945 "data_offset": 0, 00:26:02.945 "data_size": 0 00:26:02.945 }, 00:26:02.945 { 00:26:02.945 "name": "BaseBdev3", 00:26:02.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.945 "is_configured": false, 00:26:02.945 "data_offset": 0, 00:26:02.945 "data_size": 0 00:26:02.945 } 00:26:02.945 ] 00:26:02.945 }' 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.945 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.240 [2024-11-05 15:55:35.573001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:03.240 [2024-11-05 15:55:35.573157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.240 [2024-11-05 15:55:35.581044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:03.240 [2024-11-05 
15:55:35.582918] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:03.240 [2024-11-05 15:55:35.583045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:03.240 [2024-11-05 15:55:35.583060] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:03.240 [2024-11-05 15:55:35.583070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.240 "name": "Existed_Raid", 00:26:03.240 "uuid": "ab400904-2352-4006-a71d-c60cca415af1", 00:26:03.240 "strip_size_kb": 64, 00:26:03.240 "state": "configuring", 00:26:03.240 "raid_level": "raid0", 00:26:03.240 "superblock": true, 00:26:03.240 "num_base_bdevs": 3, 00:26:03.240 "num_base_bdevs_discovered": 1, 00:26:03.240 "num_base_bdevs_operational": 3, 00:26:03.240 "base_bdevs_list": [ 00:26:03.240 { 00:26:03.240 "name": "BaseBdev1", 00:26:03.240 "uuid": "e4fd22a7-9a28-407e-88bd-4441f0a3b32b", 00:26:03.240 "is_configured": true, 00:26:03.240 "data_offset": 2048, 00:26:03.240 "data_size": 63488 00:26:03.240 }, 00:26:03.240 { 00:26:03.240 "name": "BaseBdev2", 00:26:03.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.240 "is_configured": false, 00:26:03.240 "data_offset": 0, 00:26:03.240 "data_size": 0 00:26:03.240 }, 00:26:03.240 { 00:26:03.240 "name": "BaseBdev3", 00:26:03.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.240 "is_configured": false, 00:26:03.240 "data_offset": 0, 00:26:03.240 "data_size": 0 00:26:03.240 } 00:26:03.240 ] 00:26:03.240 }' 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.240 15:55:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 [2024-11-05 15:55:35.951280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:03.806 BaseBdev2 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:03.806 15:55:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.807 [ 00:26:03.807 { 00:26:03.807 "name": "BaseBdev2", 00:26:03.807 "aliases": [ 00:26:03.807 "23cfca3f-c6db-4793-a6bc-d09ad0a21fa0" 00:26:03.807 ], 00:26:03.807 "product_name": "Malloc disk", 00:26:03.807 "block_size": 512, 00:26:03.807 "num_blocks": 65536, 00:26:03.807 "uuid": "23cfca3f-c6db-4793-a6bc-d09ad0a21fa0", 00:26:03.807 "assigned_rate_limits": { 00:26:03.807 "rw_ios_per_sec": 0, 00:26:03.807 "rw_mbytes_per_sec": 0, 00:26:03.807 "r_mbytes_per_sec": 0, 00:26:03.807 "w_mbytes_per_sec": 0 00:26:03.807 }, 00:26:03.807 "claimed": true, 00:26:03.807 "claim_type": "exclusive_write", 00:26:03.807 "zoned": false, 00:26:03.807 "supported_io_types": { 00:26:03.807 "read": true, 00:26:03.807 "write": true, 00:26:03.807 "unmap": true, 00:26:03.807 "flush": true, 00:26:03.807 "reset": true, 00:26:03.807 "nvme_admin": false, 00:26:03.807 "nvme_io": false, 00:26:03.807 "nvme_io_md": false, 00:26:03.807 "write_zeroes": true, 00:26:03.807 "zcopy": true, 00:26:03.807 "get_zone_info": false, 00:26:03.807 "zone_management": false, 00:26:03.807 "zone_append": false, 00:26:03.807 "compare": false, 00:26:03.807 "compare_and_write": false, 00:26:03.807 "abort": true, 00:26:03.807 "seek_hole": false, 00:26:03.807 "seek_data": false, 00:26:03.807 "copy": true, 00:26:03.807 "nvme_iov_md": false 00:26:03.807 }, 00:26:03.807 "memory_domains": [ 00:26:03.807 { 00:26:03.807 "dma_device_id": "system", 00:26:03.807 "dma_device_type": 1 00:26:03.807 }, 00:26:03.807 { 00:26:03.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.807 "dma_device_type": 2 00:26:03.807 } 00:26:03.807 ], 00:26:03.807 "driver_specific": {} 00:26:03.807 } 00:26:03.807 ] 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.807 15:55:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.807 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.807 "name": "Existed_Raid", 00:26:03.807 "uuid": "ab400904-2352-4006-a71d-c60cca415af1", 00:26:03.807 "strip_size_kb": 64, 00:26:03.807 "state": "configuring", 00:26:03.807 "raid_level": "raid0", 00:26:03.807 "superblock": true, 00:26:03.807 "num_base_bdevs": 3, 00:26:03.807 "num_base_bdevs_discovered": 2, 00:26:03.807 "num_base_bdevs_operational": 3, 00:26:03.807 "base_bdevs_list": [ 00:26:03.807 { 00:26:03.807 "name": "BaseBdev1", 00:26:03.807 "uuid": "e4fd22a7-9a28-407e-88bd-4441f0a3b32b", 00:26:03.807 "is_configured": true, 00:26:03.807 "data_offset": 2048, 00:26:03.807 "data_size": 63488 00:26:03.807 }, 00:26:03.807 { 00:26:03.807 "name": "BaseBdev2", 00:26:03.807 "uuid": "23cfca3f-c6db-4793-a6bc-d09ad0a21fa0", 00:26:03.807 "is_configured": true, 00:26:03.807 "data_offset": 2048, 00:26:03.807 "data_size": 63488 00:26:03.807 }, 00:26:03.807 { 00:26:03.807 "name": "BaseBdev3", 00:26:03.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.807 "is_configured": false, 00:26:03.807 "data_offset": 0, 00:26:03.807 "data_size": 0 00:26:03.807 } 00:26:03.807 ] 00:26:03.807 }' 00:26:03.807 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.807 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.065 [2024-11-05 15:55:36.313631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:04.065 [2024-11-05 15:55:36.313878] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:04.065 [2024-11-05 15:55:36.313900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:04.065 [2024-11-05 15:55:36.314166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:04.065 BaseBdev3 00:26:04.065 [2024-11-05 15:55:36.314322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:04.065 [2024-11-05 15:55:36.314337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:04.065 [2024-11-05 15:55:36.314470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.065 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.065 [ 00:26:04.065 { 00:26:04.065 "name": "BaseBdev3", 00:26:04.065 "aliases": [ 00:26:04.065 "1d4ad220-a23e-49bd-99e1-d8bd2abfd0e6" 00:26:04.065 ], 00:26:04.065 "product_name": "Malloc disk", 00:26:04.065 "block_size": 512, 00:26:04.065 "num_blocks": 65536, 00:26:04.065 "uuid": "1d4ad220-a23e-49bd-99e1-d8bd2abfd0e6", 00:26:04.065 "assigned_rate_limits": { 00:26:04.065 "rw_ios_per_sec": 0, 00:26:04.065 "rw_mbytes_per_sec": 0, 00:26:04.065 "r_mbytes_per_sec": 0, 00:26:04.065 "w_mbytes_per_sec": 0 00:26:04.065 }, 00:26:04.065 "claimed": true, 00:26:04.065 "claim_type": "exclusive_write", 00:26:04.065 "zoned": false, 00:26:04.065 "supported_io_types": { 00:26:04.065 "read": true, 00:26:04.065 "write": true, 00:26:04.065 "unmap": true, 00:26:04.065 "flush": true, 00:26:04.065 "reset": true, 00:26:04.065 "nvme_admin": false, 00:26:04.065 "nvme_io": false, 00:26:04.065 "nvme_io_md": false, 00:26:04.065 "write_zeroes": true, 00:26:04.065 "zcopy": true, 00:26:04.065 "get_zone_info": false, 00:26:04.065 "zone_management": false, 00:26:04.065 "zone_append": false, 00:26:04.065 "compare": false, 00:26:04.065 "compare_and_write": false, 00:26:04.065 "abort": true, 00:26:04.065 "seek_hole": false, 00:26:04.066 "seek_data": false, 00:26:04.066 "copy": true, 00:26:04.066 "nvme_iov_md": false 00:26:04.066 }, 00:26:04.066 "memory_domains": [ 00:26:04.066 { 00:26:04.066 "dma_device_id": "system", 00:26:04.066 "dma_device_type": 1 00:26:04.066 }, 00:26:04.066 { 00:26:04.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.066 "dma_device_type": 2 00:26:04.066 } 00:26:04.066 ], 00:26:04.066 "driver_specific": 
{} 00:26:04.066 } 00:26:04.066 ] 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.066 "name": "Existed_Raid", 00:26:04.066 "uuid": "ab400904-2352-4006-a71d-c60cca415af1", 00:26:04.066 "strip_size_kb": 64, 00:26:04.066 "state": "online", 00:26:04.066 "raid_level": "raid0", 00:26:04.066 "superblock": true, 00:26:04.066 "num_base_bdevs": 3, 00:26:04.066 "num_base_bdevs_discovered": 3, 00:26:04.066 "num_base_bdevs_operational": 3, 00:26:04.066 "base_bdevs_list": [ 00:26:04.066 { 00:26:04.066 "name": "BaseBdev1", 00:26:04.066 "uuid": "e4fd22a7-9a28-407e-88bd-4441f0a3b32b", 00:26:04.066 "is_configured": true, 00:26:04.066 "data_offset": 2048, 00:26:04.066 "data_size": 63488 00:26:04.066 }, 00:26:04.066 { 00:26:04.066 "name": "BaseBdev2", 00:26:04.066 "uuid": "23cfca3f-c6db-4793-a6bc-d09ad0a21fa0", 00:26:04.066 "is_configured": true, 00:26:04.066 "data_offset": 2048, 00:26:04.066 "data_size": 63488 00:26:04.066 }, 00:26:04.066 { 00:26:04.066 "name": "BaseBdev3", 00:26:04.066 "uuid": "1d4ad220-a23e-49bd-99e1-d8bd2abfd0e6", 00:26:04.066 "is_configured": true, 00:26:04.066 "data_offset": 2048, 00:26:04.066 "data_size": 63488 00:26:04.066 } 00:26:04.066 ] 00:26:04.066 }' 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.066 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.324 [2024-11-05 15:55:36.690123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:04.324 "name": "Existed_Raid", 00:26:04.324 "aliases": [ 00:26:04.324 "ab400904-2352-4006-a71d-c60cca415af1" 00:26:04.324 ], 00:26:04.324 "product_name": "Raid Volume", 00:26:04.324 "block_size": 512, 00:26:04.324 "num_blocks": 190464, 00:26:04.324 "uuid": "ab400904-2352-4006-a71d-c60cca415af1", 00:26:04.324 "assigned_rate_limits": { 00:26:04.324 "rw_ios_per_sec": 0, 00:26:04.324 "rw_mbytes_per_sec": 0, 00:26:04.324 "r_mbytes_per_sec": 0, 00:26:04.324 "w_mbytes_per_sec": 0 00:26:04.324 }, 00:26:04.324 "claimed": false, 00:26:04.324 "zoned": false, 00:26:04.324 "supported_io_types": { 00:26:04.324 "read": true, 00:26:04.324 "write": true, 00:26:04.324 "unmap": true, 00:26:04.324 "flush": true, 00:26:04.324 "reset": true, 00:26:04.324 "nvme_admin": false, 00:26:04.324 "nvme_io": false, 00:26:04.324 "nvme_io_md": false, 00:26:04.324 
"write_zeroes": true, 00:26:04.324 "zcopy": false, 00:26:04.324 "get_zone_info": false, 00:26:04.324 "zone_management": false, 00:26:04.324 "zone_append": false, 00:26:04.324 "compare": false, 00:26:04.324 "compare_and_write": false, 00:26:04.324 "abort": false, 00:26:04.324 "seek_hole": false, 00:26:04.324 "seek_data": false, 00:26:04.324 "copy": false, 00:26:04.324 "nvme_iov_md": false 00:26:04.324 }, 00:26:04.324 "memory_domains": [ 00:26:04.324 { 00:26:04.324 "dma_device_id": "system", 00:26:04.324 "dma_device_type": 1 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.324 "dma_device_type": 2 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "dma_device_id": "system", 00:26:04.324 "dma_device_type": 1 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.324 "dma_device_type": 2 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "dma_device_id": "system", 00:26:04.324 "dma_device_type": 1 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.324 "dma_device_type": 2 00:26:04.324 } 00:26:04.324 ], 00:26:04.324 "driver_specific": { 00:26:04.324 "raid": { 00:26:04.324 "uuid": "ab400904-2352-4006-a71d-c60cca415af1", 00:26:04.324 "strip_size_kb": 64, 00:26:04.324 "state": "online", 00:26:04.324 "raid_level": "raid0", 00:26:04.324 "superblock": true, 00:26:04.324 "num_base_bdevs": 3, 00:26:04.324 "num_base_bdevs_discovered": 3, 00:26:04.324 "num_base_bdevs_operational": 3, 00:26:04.324 "base_bdevs_list": [ 00:26:04.324 { 00:26:04.324 "name": "BaseBdev1", 00:26:04.324 "uuid": "e4fd22a7-9a28-407e-88bd-4441f0a3b32b", 00:26:04.324 "is_configured": true, 00:26:04.324 "data_offset": 2048, 00:26:04.324 "data_size": 63488 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "name": "BaseBdev2", 00:26:04.324 "uuid": "23cfca3f-c6db-4793-a6bc-d09ad0a21fa0", 00:26:04.324 "is_configured": true, 00:26:04.324 "data_offset": 2048, 00:26:04.324 "data_size": 63488 00:26:04.324 }, 
00:26:04.324 { 00:26:04.324 "name": "BaseBdev3", 00:26:04.324 "uuid": "1d4ad220-a23e-49bd-99e1-d8bd2abfd0e6", 00:26:04.324 "is_configured": true, 00:26:04.324 "data_offset": 2048, 00:26:04.324 "data_size": 63488 00:26:04.324 } 00:26:04.324 ] 00:26:04.324 } 00:26:04.324 } 00:26:04.324 }' 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:04.324 BaseBdev2 00:26:04.324 BaseBdev3' 00:26:04.324 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:04.582 
15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 [2024-11-05 15:55:36.869866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:04.582 [2024-11-05 15:55:36.869890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:04.582 [2024-11-05 15:55:36.869941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.582 "name": "Existed_Raid", 00:26:04.582 "uuid": "ab400904-2352-4006-a71d-c60cca415af1", 00:26:04.582 "strip_size_kb": 64, 00:26:04.582 "state": "offline", 00:26:04.582 "raid_level": "raid0", 00:26:04.582 "superblock": true, 00:26:04.582 "num_base_bdevs": 3, 00:26:04.582 "num_base_bdevs_discovered": 2, 00:26:04.582 "num_base_bdevs_operational": 2, 00:26:04.582 "base_bdevs_list": [ 00:26:04.582 { 00:26:04.582 "name": null, 00:26:04.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.582 "is_configured": false, 00:26:04.582 "data_offset": 0, 00:26:04.582 "data_size": 63488 00:26:04.582 }, 00:26:04.582 { 00:26:04.582 "name": "BaseBdev2", 00:26:04.582 "uuid": "23cfca3f-c6db-4793-a6bc-d09ad0a21fa0", 00:26:04.582 "is_configured": true, 00:26:04.582 "data_offset": 2048, 00:26:04.582 "data_size": 63488 00:26:04.582 }, 00:26:04.582 { 00:26:04.582 "name": "BaseBdev3", 00:26:04.582 "uuid": "1d4ad220-a23e-49bd-99e1-d8bd2abfd0e6", 
00:26:04.582 "is_configured": true, 00:26:04.582 "data_offset": 2048, 00:26:04.582 "data_size": 63488 00:26:04.582 } 00:26:04.582 ] 00:26:04.582 }' 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.582 15:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.840 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:04.840 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:04.840 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.840 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:04.840 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.840 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.097 [2024-11-05 15:55:37.285480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:05.097 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.098 [2024-11-05 15:55:37.384968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:05.098 [2024-11-05 15:55:37.385012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.098 BaseBdev2 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:05.098 15:55:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.098 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.356 [ 00:26:05.356 { 00:26:05.356 "name": "BaseBdev2", 00:26:05.356 "aliases": [ 00:26:05.356 "9dbddf2c-44ca-451c-889b-daf45385d31c" 00:26:05.356 ], 00:26:05.356 "product_name": "Malloc disk", 00:26:05.356 "block_size": 512, 00:26:05.356 "num_blocks": 65536, 00:26:05.356 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:05.356 "assigned_rate_limits": { 00:26:05.356 "rw_ios_per_sec": 0, 00:26:05.356 "rw_mbytes_per_sec": 0, 00:26:05.356 "r_mbytes_per_sec": 0, 00:26:05.356 "w_mbytes_per_sec": 0 00:26:05.356 }, 00:26:05.356 "claimed": false, 00:26:05.356 "zoned": false, 00:26:05.356 "supported_io_types": { 00:26:05.356 "read": true, 00:26:05.356 "write": true, 00:26:05.356 "unmap": true, 00:26:05.356 "flush": true, 00:26:05.356 "reset": true, 00:26:05.356 "nvme_admin": false, 00:26:05.356 "nvme_io": false, 00:26:05.356 "nvme_io_md": false, 00:26:05.356 "write_zeroes": true, 00:26:05.356 "zcopy": true, 00:26:05.356 "get_zone_info": false, 00:26:05.356 
"zone_management": false, 00:26:05.356 "zone_append": false, 00:26:05.356 "compare": false, 00:26:05.356 "compare_and_write": false, 00:26:05.356 "abort": true, 00:26:05.356 "seek_hole": false, 00:26:05.356 "seek_data": false, 00:26:05.356 "copy": true, 00:26:05.356 "nvme_iov_md": false 00:26:05.356 }, 00:26:05.356 "memory_domains": [ 00:26:05.356 { 00:26:05.356 "dma_device_id": "system", 00:26:05.356 "dma_device_type": 1 00:26:05.356 }, 00:26:05.356 { 00:26:05.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.356 "dma_device_type": 2 00:26:05.356 } 00:26:05.356 ], 00:26:05.356 "driver_specific": {} 00:26:05.356 } 00:26:05.356 ] 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.356 BaseBdev3 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.356 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.356 [ 00:26:05.356 { 00:26:05.356 "name": "BaseBdev3", 00:26:05.356 "aliases": [ 00:26:05.356 "f94395e3-f336-497a-a8fd-ba963d43e9ce" 00:26:05.356 ], 00:26:05.356 "product_name": "Malloc disk", 00:26:05.356 "block_size": 512, 00:26:05.356 "num_blocks": 65536, 00:26:05.356 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:05.356 "assigned_rate_limits": { 00:26:05.356 "rw_ios_per_sec": 0, 00:26:05.356 "rw_mbytes_per_sec": 0, 00:26:05.356 "r_mbytes_per_sec": 0, 00:26:05.356 "w_mbytes_per_sec": 0 00:26:05.356 }, 00:26:05.356 "claimed": false, 00:26:05.356 "zoned": false, 00:26:05.356 "supported_io_types": { 00:26:05.356 "read": true, 00:26:05.356 "write": true, 00:26:05.356 "unmap": true, 00:26:05.356 "flush": true, 00:26:05.356 "reset": true, 00:26:05.356 "nvme_admin": false, 00:26:05.356 "nvme_io": false, 00:26:05.356 "nvme_io_md": false, 00:26:05.356 "write_zeroes": true, 00:26:05.356 
"zcopy": true, 00:26:05.356 "get_zone_info": false, 00:26:05.356 "zone_management": false, 00:26:05.356 "zone_append": false, 00:26:05.356 "compare": false, 00:26:05.356 "compare_and_write": false, 00:26:05.356 "abort": true, 00:26:05.356 "seek_hole": false, 00:26:05.356 "seek_data": false, 00:26:05.356 "copy": true, 00:26:05.356 "nvme_iov_md": false 00:26:05.356 }, 00:26:05.357 "memory_domains": [ 00:26:05.357 { 00:26:05.357 "dma_device_id": "system", 00:26:05.357 "dma_device_type": 1 00:26:05.357 }, 00:26:05.357 { 00:26:05.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.357 "dma_device_type": 2 00:26:05.357 } 00:26:05.357 ], 00:26:05.357 "driver_specific": {} 00:26:05.357 } 00:26:05.357 ] 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.357 [2024-11-05 15:55:37.592782] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:05.357 [2024-11-05 15:55:37.592938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:05.357 [2024-11-05 15:55:37.593006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:05.357 [2024-11-05 15:55:37.594560] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.357 15:55:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.357 "name": "Existed_Raid", 00:26:05.357 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:05.357 "strip_size_kb": 64, 00:26:05.357 "state": "configuring", 00:26:05.357 "raid_level": "raid0", 00:26:05.357 "superblock": true, 00:26:05.357 "num_base_bdevs": 3, 00:26:05.357 "num_base_bdevs_discovered": 2, 00:26:05.357 "num_base_bdevs_operational": 3, 00:26:05.357 "base_bdevs_list": [ 00:26:05.357 { 00:26:05.357 "name": "BaseBdev1", 00:26:05.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.357 "is_configured": false, 00:26:05.357 "data_offset": 0, 00:26:05.357 "data_size": 0 00:26:05.357 }, 00:26:05.357 { 00:26:05.357 "name": "BaseBdev2", 00:26:05.357 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:05.357 "is_configured": true, 00:26:05.357 "data_offset": 2048, 00:26:05.357 "data_size": 63488 00:26:05.357 }, 00:26:05.357 { 00:26:05.357 "name": "BaseBdev3", 00:26:05.357 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:05.357 "is_configured": true, 00:26:05.357 "data_offset": 2048, 00:26:05.357 "data_size": 63488 00:26:05.357 } 00:26:05.357 ] 00:26:05.357 }' 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.357 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.615 [2024-11-05 15:55:37.924860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.615 15:55:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.615 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.615 "name": "Existed_Raid", 00:26:05.615 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:05.615 "strip_size_kb": 64, 
00:26:05.615 "state": "configuring", 00:26:05.615 "raid_level": "raid0", 00:26:05.615 "superblock": true, 00:26:05.615 "num_base_bdevs": 3, 00:26:05.615 "num_base_bdevs_discovered": 1, 00:26:05.615 "num_base_bdevs_operational": 3, 00:26:05.615 "base_bdevs_list": [ 00:26:05.615 { 00:26:05.615 "name": "BaseBdev1", 00:26:05.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.615 "is_configured": false, 00:26:05.615 "data_offset": 0, 00:26:05.615 "data_size": 0 00:26:05.615 }, 00:26:05.615 { 00:26:05.615 "name": null, 00:26:05.615 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:05.615 "is_configured": false, 00:26:05.615 "data_offset": 0, 00:26:05.615 "data_size": 63488 00:26:05.615 }, 00:26:05.615 { 00:26:05.615 "name": "BaseBdev3", 00:26:05.615 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:05.615 "is_configured": true, 00:26:05.616 "data_offset": 2048, 00:26:05.616 "data_size": 63488 00:26:05.616 } 00:26:05.616 ] 00:26:05.616 }' 00:26:05.616 15:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.616 15:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.874 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.133 [2024-11-05 15:55:38.299149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.133 BaseBdev1 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.133 
[ 00:26:06.133 { 00:26:06.133 "name": "BaseBdev1", 00:26:06.133 "aliases": [ 00:26:06.133 "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db" 00:26:06.133 ], 00:26:06.133 "product_name": "Malloc disk", 00:26:06.133 "block_size": 512, 00:26:06.133 "num_blocks": 65536, 00:26:06.133 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:06.133 "assigned_rate_limits": { 00:26:06.133 "rw_ios_per_sec": 0, 00:26:06.133 "rw_mbytes_per_sec": 0, 00:26:06.133 "r_mbytes_per_sec": 0, 00:26:06.133 "w_mbytes_per_sec": 0 00:26:06.133 }, 00:26:06.133 "claimed": true, 00:26:06.133 "claim_type": "exclusive_write", 00:26:06.133 "zoned": false, 00:26:06.133 "supported_io_types": { 00:26:06.133 "read": true, 00:26:06.133 "write": true, 00:26:06.133 "unmap": true, 00:26:06.133 "flush": true, 00:26:06.133 "reset": true, 00:26:06.133 "nvme_admin": false, 00:26:06.133 "nvme_io": false, 00:26:06.133 "nvme_io_md": false, 00:26:06.133 "write_zeroes": true, 00:26:06.133 "zcopy": true, 00:26:06.133 "get_zone_info": false, 00:26:06.133 "zone_management": false, 00:26:06.133 "zone_append": false, 00:26:06.133 "compare": false, 00:26:06.133 "compare_and_write": false, 00:26:06.133 "abort": true, 00:26:06.133 "seek_hole": false, 00:26:06.133 "seek_data": false, 00:26:06.133 "copy": true, 00:26:06.133 "nvme_iov_md": false 00:26:06.133 }, 00:26:06.133 "memory_domains": [ 00:26:06.133 { 00:26:06.133 "dma_device_id": "system", 00:26:06.133 "dma_device_type": 1 00:26:06.133 }, 00:26:06.133 { 00:26:06.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.133 "dma_device_type": 2 00:26:06.133 } 00:26:06.133 ], 00:26:06.133 "driver_specific": {} 00:26:06.133 } 00:26:06.133 ] 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:06.133 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.134 "name": "Existed_Raid", 00:26:06.134 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:06.134 "strip_size_kb": 64, 00:26:06.134 "state": "configuring", 00:26:06.134 "raid_level": "raid0", 00:26:06.134 "superblock": true, 
00:26:06.134 "num_base_bdevs": 3, 00:26:06.134 "num_base_bdevs_discovered": 2, 00:26:06.134 "num_base_bdevs_operational": 3, 00:26:06.134 "base_bdevs_list": [ 00:26:06.134 { 00:26:06.134 "name": "BaseBdev1", 00:26:06.134 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:06.134 "is_configured": true, 00:26:06.134 "data_offset": 2048, 00:26:06.134 "data_size": 63488 00:26:06.134 }, 00:26:06.134 { 00:26:06.134 "name": null, 00:26:06.134 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:06.134 "is_configured": false, 00:26:06.134 "data_offset": 0, 00:26:06.134 "data_size": 63488 00:26:06.134 }, 00:26:06.134 { 00:26:06.134 "name": "BaseBdev3", 00:26:06.134 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:06.134 "is_configured": true, 00:26:06.134 "data_offset": 2048, 00:26:06.134 "data_size": 63488 00:26:06.134 } 00:26:06.134 ] 00:26:06.134 }' 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.134 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 [2024-11-05 15:55:38.687265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.392 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.392 "name": "Existed_Raid", 00:26:06.392 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:06.392 "strip_size_kb": 64, 00:26:06.392 "state": "configuring", 00:26:06.392 "raid_level": "raid0", 00:26:06.392 "superblock": true, 00:26:06.392 "num_base_bdevs": 3, 00:26:06.392 "num_base_bdevs_discovered": 1, 00:26:06.392 "num_base_bdevs_operational": 3, 00:26:06.392 "base_bdevs_list": [ 00:26:06.392 { 00:26:06.392 "name": "BaseBdev1", 00:26:06.392 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:06.392 "is_configured": true, 00:26:06.392 "data_offset": 2048, 00:26:06.392 "data_size": 63488 00:26:06.392 }, 00:26:06.393 { 00:26:06.393 "name": null, 00:26:06.393 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:06.393 "is_configured": false, 00:26:06.393 "data_offset": 0, 00:26:06.393 "data_size": 63488 00:26:06.393 }, 00:26:06.393 { 00:26:06.393 "name": null, 00:26:06.393 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:06.393 "is_configured": false, 00:26:06.393 "data_offset": 0, 00:26:06.393 "data_size": 63488 00:26:06.393 } 00:26:06.393 ] 00:26:06.393 }' 00:26:06.393 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.393 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.651 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:06.651 15:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.651 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.651 15:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
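The trace above repeatedly runs `verify_raid_bdev_state Existed_Raid configuring raid0 64 3`: it captures `rpc_cmd bdev_raid_get_bdevs all`, filters the array with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the resulting fields against the expected values. A minimal Python sketch of that comparison, using an abbreviated snapshot of the JSON printed above; the function shape is inferred from the log, not taken from the actual `bdev_raid.sh` source:

```python
import json

# Abbreviated snapshot of the raid_bdev_info JSON captured in the log above,
# taken after BaseBdev3 was removed (num_base_bdevs_discovered drops to 1).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Mirror the field checks the shell helper performs via jq."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must agree with the bdevs actually configured
    # in base_bdevs_list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return True

# Mirrors `verify_raid_bdev_state Existed_Raid configuring raid0 64 3`.
print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3))
```

The same check is re-run after each `bdev_raid_remove_base_bdev` / `bdev_raid_add_base_bdev` step, with only the expected discovered count changing.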
00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.651 [2024-11-05 15:55:39.031353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.651 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.922 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.922 "name": "Existed_Raid", 00:26:06.922 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:06.922 "strip_size_kb": 64, 00:26:06.922 "state": "configuring", 00:26:06.922 "raid_level": "raid0", 00:26:06.922 "superblock": true, 00:26:06.922 "num_base_bdevs": 3, 00:26:06.922 "num_base_bdevs_discovered": 2, 00:26:06.922 "num_base_bdevs_operational": 3, 00:26:06.922 "base_bdevs_list": [ 00:26:06.922 { 00:26:06.922 "name": "BaseBdev1", 00:26:06.923 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:06.923 "is_configured": true, 00:26:06.923 "data_offset": 2048, 00:26:06.923 "data_size": 63488 00:26:06.923 }, 00:26:06.923 { 00:26:06.923 "name": null, 00:26:06.923 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:06.923 "is_configured": false, 00:26:06.923 "data_offset": 0, 00:26:06.923 "data_size": 63488 00:26:06.923 }, 00:26:06.923 { 00:26:06.923 "name": "BaseBdev3", 00:26:06.923 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:06.923 "is_configured": true, 00:26:06.923 "data_offset": 2048, 00:26:06.923 "data_size": 63488 00:26:06.923 } 00:26:06.923 ] 00:26:06.923 }' 00:26:06.923 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.923 15:55:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:07.198 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:07.198 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.198 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.198 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.198 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:07.198 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.199 [2024-11-05 15:55:39.391436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.199 "name": "Existed_Raid", 00:26:07.199 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:07.199 "strip_size_kb": 64, 00:26:07.199 "state": "configuring", 00:26:07.199 "raid_level": "raid0", 00:26:07.199 "superblock": true, 00:26:07.199 "num_base_bdevs": 3, 00:26:07.199 "num_base_bdevs_discovered": 1, 00:26:07.199 "num_base_bdevs_operational": 3, 00:26:07.199 "base_bdevs_list": [ 00:26:07.199 { 00:26:07.199 "name": null, 00:26:07.199 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:07.199 "is_configured": false, 00:26:07.199 "data_offset": 0, 00:26:07.199 "data_size": 63488 00:26:07.199 }, 00:26:07.199 { 00:26:07.199 "name": null, 00:26:07.199 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:07.199 "is_configured": false, 00:26:07.199 "data_offset": 0, 00:26:07.199 
"data_size": 63488 00:26:07.199 }, 00:26:07.199 { 00:26:07.199 "name": "BaseBdev3", 00:26:07.199 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:07.199 "is_configured": true, 00:26:07.199 "data_offset": 2048, 00:26:07.199 "data_size": 63488 00:26:07.199 } 00:26:07.199 ] 00:26:07.199 }' 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.199 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.457 [2024-11-05 15:55:39.778159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:07.457 15:55:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.457 "name": "Existed_Raid", 00:26:07.457 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:07.457 "strip_size_kb": 64, 00:26:07.457 "state": "configuring", 00:26:07.457 "raid_level": "raid0", 00:26:07.457 "superblock": true, 00:26:07.457 "num_base_bdevs": 3, 00:26:07.457 
"num_base_bdevs_discovered": 2, 00:26:07.457 "num_base_bdevs_operational": 3, 00:26:07.457 "base_bdevs_list": [ 00:26:07.457 { 00:26:07.457 "name": null, 00:26:07.457 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:07.457 "is_configured": false, 00:26:07.457 "data_offset": 0, 00:26:07.457 "data_size": 63488 00:26:07.457 }, 00:26:07.457 { 00:26:07.457 "name": "BaseBdev2", 00:26:07.457 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:07.457 "is_configured": true, 00:26:07.457 "data_offset": 2048, 00:26:07.457 "data_size": 63488 00:26:07.457 }, 00:26:07.457 { 00:26:07.457 "name": "BaseBdev3", 00:26:07.457 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:07.457 "is_configured": true, 00:26:07.457 "data_offset": 2048, 00:26:07.457 "data_size": 63488 00:26:07.457 } 00:26:07.457 ] 00:26:07.457 }' 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.457 15:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.715 15:55:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.715 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 873b1cb2-1ef5-4f26-bdaf-6dd0e20365db 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.974 [2024-11-05 15:55:40.172491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:07.974 [2024-11-05 15:55:40.172650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:07.974 [2024-11-05 15:55:40.172663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:07.974 [2024-11-05 15:55:40.172881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:07.974 NewBaseBdev 00:26:07.974 [2024-11-05 15:55:40.172984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:07.974 [2024-11-05 15:55:40.172991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:07.974 [2024-11-05 15:55:40.173088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 
00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.974 [ 00:26:07.974 { 00:26:07.974 "name": "NewBaseBdev", 00:26:07.974 "aliases": [ 00:26:07.974 "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db" 00:26:07.974 ], 00:26:07.974 "product_name": "Malloc disk", 00:26:07.974 "block_size": 512, 00:26:07.974 "num_blocks": 65536, 00:26:07.974 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:07.974 "assigned_rate_limits": { 00:26:07.974 "rw_ios_per_sec": 0, 00:26:07.974 "rw_mbytes_per_sec": 0, 00:26:07.974 "r_mbytes_per_sec": 0, 00:26:07.974 "w_mbytes_per_sec": 0 00:26:07.974 }, 00:26:07.974 "claimed": true, 00:26:07.974 "claim_type": "exclusive_write", 00:26:07.974 "zoned": false, 00:26:07.974 "supported_io_types": { 00:26:07.974 "read": true, 00:26:07.974 "write": true, 
00:26:07.974 "unmap": true, 00:26:07.974 "flush": true, 00:26:07.974 "reset": true, 00:26:07.974 "nvme_admin": false, 00:26:07.974 "nvme_io": false, 00:26:07.974 "nvme_io_md": false, 00:26:07.974 "write_zeroes": true, 00:26:07.974 "zcopy": true, 00:26:07.974 "get_zone_info": false, 00:26:07.974 "zone_management": false, 00:26:07.974 "zone_append": false, 00:26:07.974 "compare": false, 00:26:07.974 "compare_and_write": false, 00:26:07.974 "abort": true, 00:26:07.974 "seek_hole": false, 00:26:07.974 "seek_data": false, 00:26:07.974 "copy": true, 00:26:07.974 "nvme_iov_md": false 00:26:07.974 }, 00:26:07.974 "memory_domains": [ 00:26:07.974 { 00:26:07.974 "dma_device_id": "system", 00:26:07.974 "dma_device_type": 1 00:26:07.974 }, 00:26:07.974 { 00:26:07.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.974 "dma_device_type": 2 00:26:07.974 } 00:26:07.974 ], 00:26:07.974 "driver_specific": {} 00:26:07.974 } 00:26:07.974 ] 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.974 "name": "Existed_Raid", 00:26:07.974 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:07.974 "strip_size_kb": 64, 00:26:07.974 "state": "online", 00:26:07.974 "raid_level": "raid0", 00:26:07.974 "superblock": true, 00:26:07.974 "num_base_bdevs": 3, 00:26:07.974 "num_base_bdevs_discovered": 3, 00:26:07.974 "num_base_bdevs_operational": 3, 00:26:07.974 "base_bdevs_list": [ 00:26:07.974 { 00:26:07.974 "name": "NewBaseBdev", 00:26:07.974 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:07.974 "is_configured": true, 00:26:07.974 "data_offset": 2048, 00:26:07.974 "data_size": 63488 00:26:07.974 }, 00:26:07.974 { 00:26:07.974 "name": "BaseBdev2", 00:26:07.974 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:07.974 "is_configured": true, 00:26:07.974 "data_offset": 2048, 00:26:07.974 "data_size": 63488 00:26:07.974 }, 00:26:07.974 { 00:26:07.974 "name": "BaseBdev3", 00:26:07.974 "uuid": 
"f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:07.974 "is_configured": true, 00:26:07.974 "data_offset": 2048, 00:26:07.974 "data_size": 63488 00:26:07.974 } 00:26:07.974 ] 00:26:07.974 }' 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.974 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.245 [2024-11-05 15:55:40.524869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.245 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:08.245 "name": "Existed_Raid", 00:26:08.245 "aliases": [ 00:26:08.245 "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe" 
00:26:08.245 ], 00:26:08.245 "product_name": "Raid Volume", 00:26:08.245 "block_size": 512, 00:26:08.245 "num_blocks": 190464, 00:26:08.245 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:08.245 "assigned_rate_limits": { 00:26:08.245 "rw_ios_per_sec": 0, 00:26:08.245 "rw_mbytes_per_sec": 0, 00:26:08.245 "r_mbytes_per_sec": 0, 00:26:08.245 "w_mbytes_per_sec": 0 00:26:08.245 }, 00:26:08.245 "claimed": false, 00:26:08.245 "zoned": false, 00:26:08.245 "supported_io_types": { 00:26:08.245 "read": true, 00:26:08.245 "write": true, 00:26:08.245 "unmap": true, 00:26:08.245 "flush": true, 00:26:08.245 "reset": true, 00:26:08.245 "nvme_admin": false, 00:26:08.245 "nvme_io": false, 00:26:08.245 "nvme_io_md": false, 00:26:08.245 "write_zeroes": true, 00:26:08.245 "zcopy": false, 00:26:08.245 "get_zone_info": false, 00:26:08.245 "zone_management": false, 00:26:08.245 "zone_append": false, 00:26:08.245 "compare": false, 00:26:08.245 "compare_and_write": false, 00:26:08.245 "abort": false, 00:26:08.245 "seek_hole": false, 00:26:08.245 "seek_data": false, 00:26:08.245 "copy": false, 00:26:08.245 "nvme_iov_md": false 00:26:08.245 }, 00:26:08.245 "memory_domains": [ 00:26:08.245 { 00:26:08.246 "dma_device_id": "system", 00:26:08.246 "dma_device_type": 1 00:26:08.246 }, 00:26:08.246 { 00:26:08.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.246 "dma_device_type": 2 00:26:08.246 }, 00:26:08.246 { 00:26:08.246 "dma_device_id": "system", 00:26:08.246 "dma_device_type": 1 00:26:08.246 }, 00:26:08.246 { 00:26:08.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.246 "dma_device_type": 2 00:26:08.246 }, 00:26:08.246 { 00:26:08.246 "dma_device_id": "system", 00:26:08.246 "dma_device_type": 1 00:26:08.246 }, 00:26:08.246 { 00:26:08.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.246 "dma_device_type": 2 00:26:08.246 } 00:26:08.246 ], 00:26:08.246 "driver_specific": { 00:26:08.246 "raid": { 00:26:08.246 "uuid": "8ab9b6b9-bc25-42ec-b1b5-ea56487d17fe", 00:26:08.246 
"strip_size_kb": 64, 00:26:08.246 "state": "online", 00:26:08.246 "raid_level": "raid0", 00:26:08.246 "superblock": true, 00:26:08.246 "num_base_bdevs": 3, 00:26:08.246 "num_base_bdevs_discovered": 3, 00:26:08.246 "num_base_bdevs_operational": 3, 00:26:08.246 "base_bdevs_list": [ 00:26:08.246 { 00:26:08.246 "name": "NewBaseBdev", 00:26:08.246 "uuid": "873b1cb2-1ef5-4f26-bdaf-6dd0e20365db", 00:26:08.246 "is_configured": true, 00:26:08.246 "data_offset": 2048, 00:26:08.246 "data_size": 63488 00:26:08.246 }, 00:26:08.246 { 00:26:08.246 "name": "BaseBdev2", 00:26:08.246 "uuid": "9dbddf2c-44ca-451c-889b-daf45385d31c", 00:26:08.246 "is_configured": true, 00:26:08.246 "data_offset": 2048, 00:26:08.246 "data_size": 63488 00:26:08.246 }, 00:26:08.246 { 00:26:08.246 "name": "BaseBdev3", 00:26:08.246 "uuid": "f94395e3-f336-497a-a8fd-ba963d43e9ce", 00:26:08.246 "is_configured": true, 00:26:08.246 "data_offset": 2048, 00:26:08.246 "data_size": 63488 00:26:08.246 } 00:26:08.246 ] 00:26:08.246 } 00:26:08.246 } 00:26:08.246 }' 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:08.246 BaseBdev2 00:26:08.246 BaseBdev3' 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:08.246 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.247 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.247 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.247 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.508 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:08.508 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:08.508 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.509 15:55:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.509 [2024-11-05 15:55:40.708638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:08.509 [2024-11-05 15:55:40.708663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:08.509 [2024-11-05 15:55:40.708724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.509 [2024-11-05 15:55:40.708768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:08.509 [2024-11-05 15:55:40.708778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62820 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62820 ']' 00:26:08.509 15:55:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62820 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62820 00:26:08.509 killing process with pid 62820 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62820' 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62820 00:26:08.509 [2024-11-05 15:55:40.739598] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:08.509 15:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62820 00:26:08.509 [2024-11-05 15:55:40.886783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.074 15:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:09.074 00:26:09.074 real 0m7.464s 00:26:09.074 user 0m12.085s 00:26:09.074 sys 0m1.182s 00:26:09.074 ************************************ 00:26:09.074 END TEST raid_state_function_test_sb 00:26:09.074 ************************************ 00:26:09.074 15:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:09.074 15:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.074 15:55:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:26:09.074 15:55:41 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:09.074 15:55:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:09.074 15:55:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.332 ************************************ 00:26:09.332 START TEST raid_superblock_test 00:26:09.332 ************************************ 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:09.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63407 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63407 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63407 ']' 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.332 15:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:09.332 [2024-11-05 15:55:41.546970] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:26:09.332 [2024-11-05 15:55:41.547059] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63407 ] 00:26:09.332 [2024-11-05 15:55:41.696264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.591 [2024-11-05 15:55:41.776188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.591 [2024-11-05 15:55:41.885713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:09.591 [2024-11-05 15:55:41.885868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:10.203 
15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.203 malloc1 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.203 [2024-11-05 15:55:42.426522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:10.203 [2024-11-05 15:55:42.426573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.203 [2024-11-05 15:55:42.426590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:10.203 [2024-11-05 15:55:42.426597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.203 [2024-11-05 15:55:42.428342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.203 [2024-11-05 15:55:42.428373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:10.203 pt1 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.203 malloc2 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.203 [2024-11-05 15:55:42.457945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:10.203 [2024-11-05 15:55:42.457989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.203 [2024-11-05 
15:55:42.458006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:10.203 [2024-11-05 15:55:42.458012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.203 [2024-11-05 15:55:42.459710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.203 [2024-11-05 15:55:42.459740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:10.203 pt2 00:26:10.203 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.204 malloc3 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.204 [2024-11-05 15:55:42.502615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:10.204 [2024-11-05 15:55:42.502657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.204 [2024-11-05 15:55:42.502674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:10.204 [2024-11-05 15:55:42.502681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.204 [2024-11-05 15:55:42.504396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.204 [2024-11-05 15:55:42.504522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:10.204 pt3 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.204 [2024-11-05 15:55:42.510663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:10.204 [2024-11-05 15:55:42.512262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:10.204 [2024-11-05 
15:55:42.512378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:10.204 [2024-11-05 15:55:42.513325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:10.204 [2024-11-05 15:55:42.513608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:10.204 [2024-11-05 15:55:42.514605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:10.204 [2024-11-05 15:55:42.515210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:10.204 [2024-11-05 15:55:42.515392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:10.204 [2024-11-05 15:55:42.516033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.204 
15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.204 "name": "raid_bdev1", 00:26:10.204 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:10.204 "strip_size_kb": 64, 00:26:10.204 "state": "online", 00:26:10.204 "raid_level": "raid0", 00:26:10.204 "superblock": true, 00:26:10.204 "num_base_bdevs": 3, 00:26:10.204 "num_base_bdevs_discovered": 3, 00:26:10.204 "num_base_bdevs_operational": 3, 00:26:10.204 "base_bdevs_list": [ 00:26:10.204 { 00:26:10.204 "name": "pt1", 00:26:10.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:10.204 "is_configured": true, 00:26:10.204 "data_offset": 2048, 00:26:10.204 "data_size": 63488 00:26:10.204 }, 00:26:10.204 { 00:26:10.204 "name": "pt2", 00:26:10.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.204 "is_configured": true, 00:26:10.204 "data_offset": 2048, 00:26:10.204 "data_size": 63488 00:26:10.204 }, 00:26:10.204 { 00:26:10.204 "name": "pt3", 00:26:10.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:10.204 "is_configured": true, 00:26:10.204 "data_offset": 2048, 00:26:10.204 "data_size": 63488 00:26:10.204 } 00:26:10.204 ] 00:26:10.204 }' 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.204 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.463 [2024-11-05 15:55:42.836110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.463 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:10.463 "name": "raid_bdev1", 00:26:10.463 "aliases": [ 00:26:10.463 "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c" 00:26:10.463 ], 00:26:10.463 "product_name": "Raid Volume", 00:26:10.463 "block_size": 512, 00:26:10.463 "num_blocks": 190464, 00:26:10.463 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:10.463 "assigned_rate_limits": { 00:26:10.463 "rw_ios_per_sec": 0, 00:26:10.463 "rw_mbytes_per_sec": 0, 00:26:10.463 "r_mbytes_per_sec": 0, 00:26:10.463 "w_mbytes_per_sec": 0 00:26:10.463 }, 00:26:10.463 "claimed": false, 00:26:10.463 "zoned": false, 00:26:10.463 
"supported_io_types": { 00:26:10.463 "read": true, 00:26:10.464 "write": true, 00:26:10.464 "unmap": true, 00:26:10.464 "flush": true, 00:26:10.464 "reset": true, 00:26:10.464 "nvme_admin": false, 00:26:10.464 "nvme_io": false, 00:26:10.464 "nvme_io_md": false, 00:26:10.464 "write_zeroes": true, 00:26:10.464 "zcopy": false, 00:26:10.464 "get_zone_info": false, 00:26:10.464 "zone_management": false, 00:26:10.464 "zone_append": false, 00:26:10.464 "compare": false, 00:26:10.464 "compare_and_write": false, 00:26:10.464 "abort": false, 00:26:10.464 "seek_hole": false, 00:26:10.464 "seek_data": false, 00:26:10.464 "copy": false, 00:26:10.464 "nvme_iov_md": false 00:26:10.464 }, 00:26:10.464 "memory_domains": [ 00:26:10.464 { 00:26:10.464 "dma_device_id": "system", 00:26:10.464 "dma_device_type": 1 00:26:10.464 }, 00:26:10.464 { 00:26:10.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.464 "dma_device_type": 2 00:26:10.464 }, 00:26:10.464 { 00:26:10.464 "dma_device_id": "system", 00:26:10.464 "dma_device_type": 1 00:26:10.464 }, 00:26:10.464 { 00:26:10.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.464 "dma_device_type": 2 00:26:10.464 }, 00:26:10.464 { 00:26:10.464 "dma_device_id": "system", 00:26:10.464 "dma_device_type": 1 00:26:10.464 }, 00:26:10.464 { 00:26:10.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.464 "dma_device_type": 2 00:26:10.464 } 00:26:10.464 ], 00:26:10.464 "driver_specific": { 00:26:10.464 "raid": { 00:26:10.464 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:10.464 "strip_size_kb": 64, 00:26:10.464 "state": "online", 00:26:10.464 "raid_level": "raid0", 00:26:10.464 "superblock": true, 00:26:10.464 "num_base_bdevs": 3, 00:26:10.464 "num_base_bdevs_discovered": 3, 00:26:10.464 "num_base_bdevs_operational": 3, 00:26:10.464 "base_bdevs_list": [ 00:26:10.464 { 00:26:10.464 "name": "pt1", 00:26:10.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:10.464 "is_configured": true, 00:26:10.464 "data_offset": 2048, 
00:26:10.464 "data_size": 63488 00:26:10.464 }, 00:26:10.464 { 00:26:10.464 "name": "pt2", 00:26:10.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.464 "is_configured": true, 00:26:10.464 "data_offset": 2048, 00:26:10.464 "data_size": 63488 00:26:10.464 }, 00:26:10.464 { 00:26:10.464 "name": "pt3", 00:26:10.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:10.464 "is_configured": true, 00:26:10.464 "data_offset": 2048, 00:26:10.464 "data_size": 63488 00:26:10.464 } 00:26:10.464 ] 00:26:10.464 } 00:26:10.464 } 00:26:10.464 }' 00:26:10.464 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:10.723 pt2 00:26:10.723 pt3' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 15:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:10.723 15:55:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 [2024-11-05 15:55:43.032087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6b10365c-1afc-4c09-80a3-7eeb28cb1c1c 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6b10365c-1afc-4c09-80a3-7eeb28cb1c1c ']' 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 [2024-11-05 15:55:43.063805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:10.723 [2024-11-05 15:55:43.063947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:10.723 [2024-11-05 15:55:43.064020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:10.723 [2024-11-05 15:55:43.064085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:10.723 [2024-11-05 15:55:43.064095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 
00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.723 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:10.982 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:10.982 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 
00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.983 [2024-11-05 15:55:43.175873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:10.983 [2024-11-05 15:55:43.177701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:10.983 [2024-11-05 15:55:43.177750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:10.983 [2024-11-05 15:55:43.177793] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:10.983 [2024-11-05 15:55:43.177837] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:10.983 [2024-11-05 15:55:43.177874] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:10.983 [2024-11-05 15:55:43.177890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:10.983 [2024-11-05 15:55:43.177901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:10.983 request: 00:26:10.983 { 00:26:10.983 "name": "raid_bdev1", 00:26:10.983 "raid_level": "raid0", 00:26:10.983 "base_bdevs": [ 00:26:10.983 "malloc1", 00:26:10.983 "malloc2", 00:26:10.983 "malloc3" 00:26:10.983 ], 00:26:10.983 "strip_size_kb": 64, 00:26:10.983 "superblock": false, 00:26:10.983 "method": "bdev_raid_create", 00:26:10.983 "req_id": 1 00:26:10.983 } 00:26:10.983 Got JSON-RPC error response 00:26:10.983 response: 00:26:10.983 { 00:26:10.983 "code": -17, 00:26:10.983 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:10.983 } 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.983 [2024-11-05 15:55:43.219830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:10.983 [2024-11-05 15:55:43.219879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.983 [2024-11-05 15:55:43.219895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:10.983 [2024-11-05 15:55:43.219903] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:26:10.983 [2024-11-05 15:55:43.222013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.983 [2024-11-05 15:55:43.222043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:10.983 [2024-11-05 15:55:43.222106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:10.983 [2024-11-05 15:55:43.222150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:10.983 pt1 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.983 15:55:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.983 "name": "raid_bdev1", 00:26:10.983 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:10.983 "strip_size_kb": 64, 00:26:10.983 "state": "configuring", 00:26:10.983 "raid_level": "raid0", 00:26:10.983 "superblock": true, 00:26:10.983 "num_base_bdevs": 3, 00:26:10.983 "num_base_bdevs_discovered": 1, 00:26:10.983 "num_base_bdevs_operational": 3, 00:26:10.983 "base_bdevs_list": [ 00:26:10.983 { 00:26:10.983 "name": "pt1", 00:26:10.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:10.983 "is_configured": true, 00:26:10.983 "data_offset": 2048, 00:26:10.983 "data_size": 63488 00:26:10.983 }, 00:26:10.983 { 00:26:10.983 "name": null, 00:26:10.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.983 "is_configured": false, 00:26:10.983 "data_offset": 2048, 00:26:10.983 "data_size": 63488 00:26:10.983 }, 00:26:10.983 { 00:26:10.983 "name": null, 00:26:10.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:10.983 "is_configured": false, 00:26:10.983 "data_offset": 2048, 00:26:10.983 "data_size": 63488 00:26:10.983 } 00:26:10.983 ] 00:26:10.983 }' 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.983 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:11.243 15:55:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.243 [2024-11-05 15:55:43.531948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:11.243 [2024-11-05 15:55:43.532007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.243 [2024-11-05 15:55:43.532027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:11.243 [2024-11-05 15:55:43.532036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.243 [2024-11-05 15:55:43.532436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.243 [2024-11-05 15:55:43.532450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:11.243 [2024-11-05 15:55:43.532522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:11.243 [2024-11-05 15:55:43.532542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:11.243 pt2 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.243 [2024-11-05 15:55:43.539957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.243 "name": "raid_bdev1", 00:26:11.243 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:11.243 "strip_size_kb": 64, 00:26:11.243 "state": "configuring", 00:26:11.243 "raid_level": "raid0", 00:26:11.243 "superblock": true, 00:26:11.243 "num_base_bdevs": 3, 00:26:11.243 "num_base_bdevs_discovered": 1, 00:26:11.243 "num_base_bdevs_operational": 3, 00:26:11.243 "base_bdevs_list": [ 00:26:11.243 { 00:26:11.243 "name": 
"pt1", 00:26:11.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:11.243 "is_configured": true, 00:26:11.243 "data_offset": 2048, 00:26:11.243 "data_size": 63488 00:26:11.243 }, 00:26:11.243 { 00:26:11.243 "name": null, 00:26:11.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:11.243 "is_configured": false, 00:26:11.243 "data_offset": 0, 00:26:11.243 "data_size": 63488 00:26:11.243 }, 00:26:11.243 { 00:26:11.243 "name": null, 00:26:11.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:11.243 "is_configured": false, 00:26:11.243 "data_offset": 2048, 00:26:11.243 "data_size": 63488 00:26:11.243 } 00:26:11.243 ] 00:26:11.243 }' 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.243 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.502 [2024-11-05 15:55:43.847994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:11.502 [2024-11-05 15:55:43.848050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.502 [2024-11-05 15:55:43.848065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:11.502 [2024-11-05 15:55:43.848076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.502 [2024-11-05 15:55:43.848484] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:26:11.502 [2024-11-05 15:55:43.848499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:11.502 [2024-11-05 15:55:43.848568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:11.502 [2024-11-05 15:55:43.848588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:11.502 pt2 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.502 [2024-11-05 15:55:43.855987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:11.502 [2024-11-05 15:55:43.856026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.502 [2024-11-05 15:55:43.856038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:11.502 [2024-11-05 15:55:43.856048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.502 [2024-11-05 15:55:43.856384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.502 [2024-11-05 15:55:43.856406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:11.502 [2024-11-05 15:55:43.856457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:11.502 [2024-11-05 15:55:43.856475] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:11.502 [2024-11-05 15:55:43.856581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:11.502 [2024-11-05 15:55:43.856596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:11.502 [2024-11-05 15:55:43.856821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:11.502 [2024-11-05 15:55:43.856967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:11.502 [2024-11-05 15:55:43.856976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:11.502 [2024-11-05 15:55:43.857095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.502 pt3 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.502 15:55:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.502 "name": "raid_bdev1", 00:26:11.502 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:11.502 "strip_size_kb": 64, 00:26:11.502 "state": "online", 00:26:11.502 "raid_level": "raid0", 00:26:11.502 "superblock": true, 00:26:11.502 "num_base_bdevs": 3, 00:26:11.502 "num_base_bdevs_discovered": 3, 00:26:11.502 "num_base_bdevs_operational": 3, 00:26:11.502 "base_bdevs_list": [ 00:26:11.502 { 00:26:11.502 "name": "pt1", 00:26:11.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:11.502 "is_configured": true, 00:26:11.502 "data_offset": 2048, 00:26:11.502 "data_size": 63488 00:26:11.502 }, 00:26:11.502 { 00:26:11.502 "name": "pt2", 00:26:11.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:11.502 "is_configured": true, 00:26:11.502 "data_offset": 2048, 00:26:11.502 "data_size": 63488 00:26:11.502 }, 00:26:11.502 { 00:26:11.502 "name": "pt3", 00:26:11.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:11.502 "is_configured": true, 00:26:11.502 "data_offset": 2048, 00:26:11.502 "data_size": 63488 00:26:11.502 } 
00:26:11.502 ] 00:26:11.502 }' 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.502 15:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.075 [2024-11-05 15:55:44.188399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:12.075 "name": "raid_bdev1", 00:26:12.075 "aliases": [ 00:26:12.075 "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c" 00:26:12.075 ], 00:26:12.075 "product_name": "Raid Volume", 00:26:12.075 "block_size": 512, 00:26:12.075 "num_blocks": 190464, 00:26:12.075 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:12.075 "assigned_rate_limits": { 00:26:12.075 "rw_ios_per_sec": 0, 
00:26:12.075 "rw_mbytes_per_sec": 0, 00:26:12.075 "r_mbytes_per_sec": 0, 00:26:12.075 "w_mbytes_per_sec": 0 00:26:12.075 }, 00:26:12.075 "claimed": false, 00:26:12.075 "zoned": false, 00:26:12.075 "supported_io_types": { 00:26:12.075 "read": true, 00:26:12.075 "write": true, 00:26:12.075 "unmap": true, 00:26:12.075 "flush": true, 00:26:12.075 "reset": true, 00:26:12.075 "nvme_admin": false, 00:26:12.075 "nvme_io": false, 00:26:12.075 "nvme_io_md": false, 00:26:12.075 "write_zeroes": true, 00:26:12.075 "zcopy": false, 00:26:12.075 "get_zone_info": false, 00:26:12.075 "zone_management": false, 00:26:12.075 "zone_append": false, 00:26:12.075 "compare": false, 00:26:12.075 "compare_and_write": false, 00:26:12.075 "abort": false, 00:26:12.075 "seek_hole": false, 00:26:12.075 "seek_data": false, 00:26:12.075 "copy": false, 00:26:12.075 "nvme_iov_md": false 00:26:12.075 }, 00:26:12.075 "memory_domains": [ 00:26:12.075 { 00:26:12.075 "dma_device_id": "system", 00:26:12.075 "dma_device_type": 1 00:26:12.075 }, 00:26:12.075 { 00:26:12.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.075 "dma_device_type": 2 00:26:12.075 }, 00:26:12.075 { 00:26:12.075 "dma_device_id": "system", 00:26:12.075 "dma_device_type": 1 00:26:12.075 }, 00:26:12.075 { 00:26:12.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.075 "dma_device_type": 2 00:26:12.075 }, 00:26:12.075 { 00:26:12.075 "dma_device_id": "system", 00:26:12.075 "dma_device_type": 1 00:26:12.075 }, 00:26:12.075 { 00:26:12.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.075 "dma_device_type": 2 00:26:12.075 } 00:26:12.075 ], 00:26:12.075 "driver_specific": { 00:26:12.075 "raid": { 00:26:12.075 "uuid": "6b10365c-1afc-4c09-80a3-7eeb28cb1c1c", 00:26:12.075 "strip_size_kb": 64, 00:26:12.075 "state": "online", 00:26:12.075 "raid_level": "raid0", 00:26:12.075 "superblock": true, 00:26:12.075 "num_base_bdevs": 3, 00:26:12.075 "num_base_bdevs_discovered": 3, 00:26:12.075 "num_base_bdevs_operational": 3, 00:26:12.075 
"base_bdevs_list": [ 00:26:12.075 { 00:26:12.075 "name": "pt1", 00:26:12.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:12.075 "is_configured": true, 00:26:12.075 "data_offset": 2048, 00:26:12.075 "data_size": 63488 00:26:12.075 }, 00:26:12.075 { 00:26:12.075 "name": "pt2", 00:26:12.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:12.075 "is_configured": true, 00:26:12.075 "data_offset": 2048, 00:26:12.075 "data_size": 63488 00:26:12.075 }, 00:26:12.075 { 00:26:12.075 "name": "pt3", 00:26:12.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:12.075 "is_configured": true, 00:26:12.075 "data_offset": 2048, 00:26:12.075 "data_size": 63488 00:26:12.075 } 00:26:12.075 ] 00:26:12.075 } 00:26:12.075 } 00:26:12.075 }' 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:12.075 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:12.075 pt2 00:26:12.076 pt3' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 [2024-11-05 15:55:44.384397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6b10365c-1afc-4c09-80a3-7eeb28cb1c1c '!=' 6b10365c-1afc-4c09-80a3-7eeb28cb1c1c ']' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63407 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63407 ']' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63407 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63407 00:26:12.076 killing process with pid 63407 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63407' 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63407 00:26:12.076 [2024-11-05 15:55:44.438287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:12.076 [2024-11-05 15:55:44.438365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:12.076 15:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63407 00:26:12.076 [2024-11-05 15:55:44.438420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:12.076 [2024-11-05 15:55:44.438432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:12.363 [2024-11-05 15:55:44.624150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:12.931 15:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:12.931 00:26:12.931 real 0m3.801s 00:26:12.931 user 0m5.497s 00:26:12.931 sys 0m0.582s 00:26:12.931 15:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:12.931 15:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.931 ************************************ 00:26:12.931 END TEST raid_superblock_test 00:26:12.931 ************************************ 00:26:12.931 15:55:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:26:12.931 15:55:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:12.931 15:55:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:12.931 15:55:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:12.931 ************************************ 00:26:12.931 START 
TEST raid_read_error_test 00:26:12.931 ************************************ 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:12.931 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:13.190 15:55:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:13.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TYnIdkri7R 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63649 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63649 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63649 ']' 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.190 15:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:13.190 [2024-11-05 15:55:45.412003] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:26:13.190 [2024-11-05 15:55:45.412122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63649 ] 00:26:13.190 [2024-11-05 15:55:45.567947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.449 [2024-11-05 15:55:45.653233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.449 [2024-11-05 15:55:45.762899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:13.449 [2024-11-05 15:55:45.763043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.016 BaseBdev1_malloc 00:26:14.016 15:55:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.016 true 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.016 [2024-11-05 15:55:46.288379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:14.016 [2024-11-05 15:55:46.288425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.016 [2024-11-05 15:55:46.288441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:14.016 [2024-11-05 15:55:46.288450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.016 [2024-11-05 15:55:46.290170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.016 [2024-11-05 15:55:46.290334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:14.016 BaseBdev1 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.016 BaseBdev2_malloc 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.016 true 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.016 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.017 [2024-11-05 15:55:46.327588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:14.017 [2024-11-05 15:55:46.327625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.017 [2024-11-05 15:55:46.327637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:14.017 [2024-11-05 15:55:46.327645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.017 [2024-11-05 15:55:46.329332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.017 [2024-11-05 15:55:46.329448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:14.017 BaseBdev2 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.017 BaseBdev3_malloc 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.017 true 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.017 [2024-11-05 15:55:46.386544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:14.017 [2024-11-05 15:55:46.386584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.017 [2024-11-05 15:55:46.386597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:14.017 [2024-11-05 15:55:46.386606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.017 [2024-11-05 15:55:46.388326] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.017 [2024-11-05 15:55:46.388443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:14.017 BaseBdev3 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.017 [2024-11-05 15:55:46.394600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:14.017 [2024-11-05 15:55:46.396180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:14.017 [2024-11-05 15:55:46.396301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:14.017 [2024-11-05 15:55:46.396504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:14.017 [2024-11-05 15:55:46.396566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:14.017 [2024-11-05 15:55:46.396783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:26:14.017 [2024-11-05 15:55:46.396957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:14.017 [2024-11-05 15:55:46.397019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:14.017 [2024-11-05 15:55:46.397183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.017 15:55:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.017 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.275 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.275 "name": "raid_bdev1", 00:26:14.275 "uuid": "67250e8f-655b-4f5e-bccc-ad110f2ae4b1", 00:26:14.275 "strip_size_kb": 64, 00:26:14.275 "state": "online", 00:26:14.275 "raid_level": "raid0", 00:26:14.275 "superblock": true, 00:26:14.275 "num_base_bdevs": 3, 
00:26:14.275 "num_base_bdevs_discovered": 3, 00:26:14.275 "num_base_bdevs_operational": 3, 00:26:14.275 "base_bdevs_list": [ 00:26:14.275 { 00:26:14.275 "name": "BaseBdev1", 00:26:14.275 "uuid": "09e4c470-cd8c-5e6c-8407-02dd0b89cf2a", 00:26:14.275 "is_configured": true, 00:26:14.275 "data_offset": 2048, 00:26:14.275 "data_size": 63488 00:26:14.275 }, 00:26:14.275 { 00:26:14.275 "name": "BaseBdev2", 00:26:14.275 "uuid": "beec8f52-3ea8-5bb0-9045-3f185b285b90", 00:26:14.275 "is_configured": true, 00:26:14.275 "data_offset": 2048, 00:26:14.275 "data_size": 63488 00:26:14.275 }, 00:26:14.276 { 00:26:14.276 "name": "BaseBdev3", 00:26:14.276 "uuid": "f60dd105-4215-5781-92fb-6e8805a33993", 00:26:14.276 "is_configured": true, 00:26:14.276 "data_offset": 2048, 00:26:14.276 "data_size": 63488 00:26:14.276 } 00:26:14.276 ] 00:26:14.276 }' 00:26:14.276 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.276 15:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.534 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:14.534 15:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:14.534 [2024-11-05 15:55:46.827475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:15.468 
15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.468 "name": "raid_bdev1", 
00:26:15.468 "uuid": "67250e8f-655b-4f5e-bccc-ad110f2ae4b1", 00:26:15.468 "strip_size_kb": 64, 00:26:15.468 "state": "online", 00:26:15.468 "raid_level": "raid0", 00:26:15.468 "superblock": true, 00:26:15.468 "num_base_bdevs": 3, 00:26:15.468 "num_base_bdevs_discovered": 3, 00:26:15.468 "num_base_bdevs_operational": 3, 00:26:15.468 "base_bdevs_list": [ 00:26:15.468 { 00:26:15.468 "name": "BaseBdev1", 00:26:15.468 "uuid": "09e4c470-cd8c-5e6c-8407-02dd0b89cf2a", 00:26:15.468 "is_configured": true, 00:26:15.468 "data_offset": 2048, 00:26:15.468 "data_size": 63488 00:26:15.468 }, 00:26:15.468 { 00:26:15.468 "name": "BaseBdev2", 00:26:15.468 "uuid": "beec8f52-3ea8-5bb0-9045-3f185b285b90", 00:26:15.468 "is_configured": true, 00:26:15.468 "data_offset": 2048, 00:26:15.468 "data_size": 63488 00:26:15.468 }, 00:26:15.468 { 00:26:15.468 "name": "BaseBdev3", 00:26:15.468 "uuid": "f60dd105-4215-5781-92fb-6e8805a33993", 00:26:15.468 "is_configured": true, 00:26:15.468 "data_offset": 2048, 00:26:15.468 "data_size": 63488 00:26:15.468 } 00:26:15.468 ] 00:26:15.468 }' 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.468 15:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.726 [2024-11-05 15:55:48.056826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:15.726 [2024-11-05 15:55:48.056863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:15.726 [2024-11-05 15:55:48.059229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:15.726 [2024-11-05 15:55:48.059268] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:15.726 [2024-11-05 15:55:48.059299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:15.726 [2024-11-05 15:55:48.059307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:15.726 { 00:26:15.726 "results": [ 00:26:15.726 { 00:26:15.726 "job": "raid_bdev1", 00:26:15.726 "core_mask": "0x1", 00:26:15.726 "workload": "randrw", 00:26:15.726 "percentage": 50, 00:26:15.726 "status": "finished", 00:26:15.726 "queue_depth": 1, 00:26:15.726 "io_size": 131072, 00:26:15.726 "runtime": 1.227729, 00:26:15.726 "iops": 17525.854647076023, 00:26:15.726 "mibps": 2190.731830884503, 00:26:15.726 "io_failed": 1, 00:26:15.726 "io_timeout": 0, 00:26:15.726 "avg_latency_us": 78.22103455425511, 00:26:15.726 "min_latency_us": 25.796923076923076, 00:26:15.726 "max_latency_us": 1840.0492307692307 00:26:15.726 } 00:26:15.726 ], 00:26:15.726 "core_count": 1 00:26:15.726 } 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63649 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63649 ']' 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63649 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63649 00:26:15.726 killing process with pid 63649 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:15.726 15:55:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63649' 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63649 00:26:15.726 [2024-11-05 15:55:48.089730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:15.726 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63649 00:26:15.984 [2024-11-05 15:55:48.201113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TYnIdkri7R 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:26:16.550 00:26:16.550 real 0m3.453s 00:26:16.550 user 0m4.165s 00:26:16.550 sys 0m0.361s 00:26:16.550 ************************************ 00:26:16.550 END TEST raid_read_error_test 00:26:16.550 ************************************ 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:16.550 15:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.550 15:55:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 3 write 00:26:16.550 15:55:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:16.550 15:55:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:16.550 15:55:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:16.550 ************************************ 00:26:16.550 START TEST raid_write_error_test 00:26:16.550 ************************************ 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:16.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.61n3o855i7 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63784 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63784 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63784 ']' 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.550 15:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:16.550 [2024-11-05 15:55:48.905731] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:26:16.550 [2024-11-05 15:55:48.905981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63784 ] 00:26:16.808 [2024-11-05 15:55:49.064775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.808 [2024-11-05 15:55:49.163195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.065 [2024-11-05 15:55:49.297230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:17.065 [2024-11-05 15:55:49.297273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:17.636 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.636 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:26:17.636 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:17.636 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:17.636 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.636 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.636 BaseBdev1_malloc 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 true 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 [2024-11-05 15:55:49.796083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:17.637 [2024-11-05 15:55:49.796134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.637 [2024-11-05 15:55:49.796151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:17.637 [2024-11-05 15:55:49.796162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.637 [2024-11-05 15:55:49.798258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.637 [2024-11-05 15:55:49.798293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:17.637 BaseBdev1 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 BaseBdev2_malloc 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:17.637 15:55:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 true 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 [2024-11-05 15:55:49.843691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:17.637 [2024-11-05 15:55:49.843737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.637 [2024-11-05 15:55:49.843753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:17.637 [2024-11-05 15:55:49.843764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.637 [2024-11-05 15:55:49.845828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.637 [2024-11-05 15:55:49.845871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:17.637 BaseBdev2 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:26:17.637 BaseBdev3_malloc 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 true 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 [2024-11-05 15:55:49.900257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:17.637 [2024-11-05 15:55:49.900307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.637 [2024-11-05 15:55:49.900324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:17.637 [2024-11-05 15:55:49.900334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.637 [2024-11-05 15:55:49.902425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.637 [2024-11-05 15:55:49.902569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:17.637 BaseBdev3 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 
00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 [2024-11-05 15:55:49.908327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:17.637 [2024-11-05 15:55:49.910139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:17.637 [2024-11-05 15:55:49.910214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:17.637 [2024-11-05 15:55:49.910422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:17.637 [2024-11-05 15:55:49.910434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:17.637 [2024-11-05 15:55:49.910675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:26:17.637 [2024-11-05 15:55:49.910812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:17.637 [2024-11-05 15:55:49.910824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:17.637 [2024-11-05 15:55:49.910971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.637 "name": "raid_bdev1", 00:26:17.637 "uuid": "7ee7909d-1178-48a5-9180-c039c22d9bbb", 00:26:17.637 "strip_size_kb": 64, 00:26:17.637 "state": "online", 00:26:17.637 "raid_level": "raid0", 00:26:17.637 "superblock": true, 00:26:17.637 "num_base_bdevs": 3, 00:26:17.637 "num_base_bdevs_discovered": 3, 00:26:17.637 "num_base_bdevs_operational": 3, 00:26:17.637 "base_bdevs_list": [ 00:26:17.637 { 00:26:17.637 "name": "BaseBdev1", 00:26:17.637 "uuid": "1b1bf05e-59c5-5ef1-8a67-b488d8a00e3c", 00:26:17.637 "is_configured": true, 00:26:17.637 "data_offset": 2048, 00:26:17.637 "data_size": 63488 00:26:17.637 }, 00:26:17.637 { 00:26:17.637 "name": "BaseBdev2", 00:26:17.637 "uuid": "c4e05be3-a1cc-56e8-90cd-79cf23972cea", 00:26:17.637 "is_configured": 
true, 00:26:17.637 "data_offset": 2048, 00:26:17.637 "data_size": 63488 00:26:17.637 }, 00:26:17.637 { 00:26:17.637 "name": "BaseBdev3", 00:26:17.637 "uuid": "0d5d9bce-d95c-58b5-93a1-48bd69b3eb79", 00:26:17.637 "is_configured": true, 00:26:17.637 "data_offset": 2048, 00:26:17.637 "data_size": 63488 00:26:17.637 } 00:26:17.637 ] 00:26:17.637 }' 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.637 15:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.897 15:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:17.897 15:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:17.897 [2024-11-05 15:55:50.301333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:18.830 15:55:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.830 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.088 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.088 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.088 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.088 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.088 "name": "raid_bdev1", 00:26:19.088 "uuid": "7ee7909d-1178-48a5-9180-c039c22d9bbb", 00:26:19.088 "strip_size_kb": 64, 00:26:19.088 "state": "online", 00:26:19.088 "raid_level": "raid0", 00:26:19.088 "superblock": true, 00:26:19.088 "num_base_bdevs": 3, 00:26:19.088 "num_base_bdevs_discovered": 3, 00:26:19.088 "num_base_bdevs_operational": 3, 00:26:19.088 "base_bdevs_list": [ 00:26:19.088 { 00:26:19.088 "name": "BaseBdev1", 00:26:19.088 "uuid": "1b1bf05e-59c5-5ef1-8a67-b488d8a00e3c", 
00:26:19.088 "is_configured": true, 00:26:19.088 "data_offset": 2048, 00:26:19.088 "data_size": 63488 00:26:19.088 }, 00:26:19.088 { 00:26:19.088 "name": "BaseBdev2", 00:26:19.088 "uuid": "c4e05be3-a1cc-56e8-90cd-79cf23972cea", 00:26:19.088 "is_configured": true, 00:26:19.088 "data_offset": 2048, 00:26:19.088 "data_size": 63488 00:26:19.088 }, 00:26:19.088 { 00:26:19.088 "name": "BaseBdev3", 00:26:19.088 "uuid": "0d5d9bce-d95c-58b5-93a1-48bd69b3eb79", 00:26:19.088 "is_configured": true, 00:26:19.088 "data_offset": 2048, 00:26:19.088 "data_size": 63488 00:26:19.088 } 00:26:19.088 ] 00:26:19.088 }' 00:26:19.088 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.088 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.348 [2024-11-05 15:55:51.555212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:19.348 [2024-11-05 15:55:51.555356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:19.348 [2024-11-05 15:55:51.558435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:19.348 [2024-11-05 15:55:51.558570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:19.348 [2024-11-05 15:55:51.558631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr{ 00:26:19.348 "results": [ 00:26:19.348 { 00:26:19.348 "job": "raid_bdev1", 00:26:19.348 "core_mask": "0x1", 00:26:19.348 "workload": "randrw", 00:26:19.348 "percentage": 50, 00:26:19.348 "status": "finished", 00:26:19.348 "queue_depth": 1, 00:26:19.348 
"io_size": 131072, 00:26:19.348 "runtime": 1.252067, 00:26:19.348 "iops": 15154.141112256772, 00:26:19.348 "mibps": 1894.2676390320964, 00:26:19.348 "io_failed": 1, 00:26:19.348 "io_timeout": 0, 00:26:19.348 "avg_latency_us": 90.12640470254384, 00:26:19.348 "min_latency_us": 33.28, 00:26:19.348 "max_latency_us": 1688.8123076923077 00:26:19.348 } 00:26:19.348 ], 00:26:19.348 "core_count": 1 00:26:19.348 } 00:26:19.348 ee all in destruct 00:26:19.348 [2024-11-05 15:55:51.559001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63784 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63784 ']' 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63784 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63784 00:26:19.348 killing process with pid 63784 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63784' 00:26:19.348 15:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63784 00:26:19.348 [2024-11-05 15:55:51.589575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:19.348 15:55:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63784 00:26:19.348 [2024-11-05 15:55:51.728710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.61n3o855i7 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:26:20.293 00:26:20.293 real 0m3.543s 00:26:20.293 user 0m4.221s 00:26:20.293 sys 0m0.387s 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:20.293 15:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.293 ************************************ 00:26:20.293 END TEST raid_write_error_test 00:26:20.293 ************************************ 00:26:20.293 15:55:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:20.293 15:55:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:26:20.293 15:55:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:20.293 15:55:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:20.293 15:55:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:20.293 ************************************ 00:26:20.293 START TEST 
raid_state_function_test 00:26:20.293 ************************************ 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:20.293 Process raid pid: 63916 00:26:20.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63916 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63916' 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63916 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63916 ']' 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.293 15:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:20.293 [2024-11-05 15:55:52.478741] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:26:20.293 [2024-11-05 15:55:52.478835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.293 [2024-11-05 15:55:52.629331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.552 [2024-11-05 15:55:52.714166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.552 [2024-11-05 15:55:52.827253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:20.552 [2024-11-05 15:55:52.827278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.118 [2024-11-05 15:55:53.336023] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:21.118 [2024-11-05 15:55:53.336067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:21.118 [2024-11-05 15:55:53.336076] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:21.118 [2024-11-05 15:55:53.336084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:21.118 [2024-11-05 15:55:53.336090] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:26:21.118 [2024-11-05 15:55:53.336097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.118 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.119 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.119 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.119 15:55:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.119 "name": "Existed_Raid", 00:26:21.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.119 "strip_size_kb": 64, 00:26:21.119 "state": "configuring", 00:26:21.119 "raid_level": "concat", 00:26:21.119 "superblock": false, 00:26:21.119 "num_base_bdevs": 3, 00:26:21.119 "num_base_bdevs_discovered": 0, 00:26:21.119 "num_base_bdevs_operational": 3, 00:26:21.119 "base_bdevs_list": [ 00:26:21.119 { 00:26:21.119 "name": "BaseBdev1", 00:26:21.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.119 "is_configured": false, 00:26:21.119 "data_offset": 0, 00:26:21.119 "data_size": 0 00:26:21.119 }, 00:26:21.119 { 00:26:21.119 "name": "BaseBdev2", 00:26:21.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.119 "is_configured": false, 00:26:21.119 "data_offset": 0, 00:26:21.119 "data_size": 0 00:26:21.119 }, 00:26:21.119 { 00:26:21.119 "name": "BaseBdev3", 00:26:21.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.119 "is_configured": false, 00:26:21.119 "data_offset": 0, 00:26:21.119 "data_size": 0 00:26:21.119 } 00:26:21.119 ] 00:26:21.119 }' 00:26:21.119 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:21.119 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.376 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:21.376 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.376 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.376 [2024-11-05 15:55:53.660044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:21.376 [2024-11-05 15:55:53.660073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:26:21.376 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.377 [2024-11-05 15:55:53.668039] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:21.377 [2024-11-05 15:55:53.668073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:21.377 [2024-11-05 15:55:53.668080] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:21.377 [2024-11-05 15:55:53.668088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:21.377 [2024-11-05 15:55:53.668093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:21.377 [2024-11-05 15:55:53.668100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.377 BaseBdev1 00:26:21.377 [2024-11-05 15:55:53.697215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.377 [ 00:26:21.377 { 00:26:21.377 "name": "BaseBdev1", 00:26:21.377 "aliases": [ 00:26:21.377 "7a285b28-73c3-4f45-8155-c413738cf8e3" 00:26:21.377 ], 00:26:21.377 "product_name": "Malloc disk", 00:26:21.377 "block_size": 512, 00:26:21.377 "num_blocks": 65536, 00:26:21.377 "uuid": "7a285b28-73c3-4f45-8155-c413738cf8e3", 00:26:21.377 "assigned_rate_limits": { 00:26:21.377 "rw_ios_per_sec": 0, 00:26:21.377 "rw_mbytes_per_sec": 0, 00:26:21.377 "r_mbytes_per_sec": 0, 00:26:21.377 "w_mbytes_per_sec": 0 00:26:21.377 }, 
00:26:21.377 "claimed": true, 00:26:21.377 "claim_type": "exclusive_write", 00:26:21.377 "zoned": false, 00:26:21.377 "supported_io_types": { 00:26:21.377 "read": true, 00:26:21.377 "write": true, 00:26:21.377 "unmap": true, 00:26:21.377 "flush": true, 00:26:21.377 "reset": true, 00:26:21.377 "nvme_admin": false, 00:26:21.377 "nvme_io": false, 00:26:21.377 "nvme_io_md": false, 00:26:21.377 "write_zeroes": true, 00:26:21.377 "zcopy": true, 00:26:21.377 "get_zone_info": false, 00:26:21.377 "zone_management": false, 00:26:21.377 "zone_append": false, 00:26:21.377 "compare": false, 00:26:21.377 "compare_and_write": false, 00:26:21.377 "abort": true, 00:26:21.377 "seek_hole": false, 00:26:21.377 "seek_data": false, 00:26:21.377 "copy": true, 00:26:21.377 "nvme_iov_md": false 00:26:21.377 }, 00:26:21.377 "memory_domains": [ 00:26:21.377 { 00:26:21.377 "dma_device_id": "system", 00:26:21.377 "dma_device_type": 1 00:26:21.377 }, 00:26:21.377 { 00:26:21.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.377 "dma_device_type": 2 00:26:21.377 } 00:26:21.377 ], 00:26:21.377 "driver_specific": {} 00:26:21.377 } 00:26:21.377 ] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:21.377 15:55:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.377 "name": "Existed_Raid", 00:26:21.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.377 "strip_size_kb": 64, 00:26:21.377 "state": "configuring", 00:26:21.377 "raid_level": "concat", 00:26:21.377 "superblock": false, 00:26:21.377 "num_base_bdevs": 3, 00:26:21.377 "num_base_bdevs_discovered": 1, 00:26:21.377 "num_base_bdevs_operational": 3, 00:26:21.377 "base_bdevs_list": [ 00:26:21.377 { 00:26:21.377 "name": "BaseBdev1", 00:26:21.377 "uuid": "7a285b28-73c3-4f45-8155-c413738cf8e3", 00:26:21.377 "is_configured": true, 00:26:21.377 "data_offset": 0, 00:26:21.377 "data_size": 65536 00:26:21.377 }, 00:26:21.377 { 00:26:21.377 "name": "BaseBdev2", 00:26:21.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.377 "is_configured": false, 00:26:21.377 
"data_offset": 0, 00:26:21.377 "data_size": 0 00:26:21.377 }, 00:26:21.377 { 00:26:21.377 "name": "BaseBdev3", 00:26:21.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.377 "is_configured": false, 00:26:21.377 "data_offset": 0, 00:26:21.377 "data_size": 0 00:26:21.377 } 00:26:21.377 ] 00:26:21.377 }' 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:21.377 15:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.635 [2024-11-05 15:55:54.025326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:21.635 [2024-11-05 15:55:54.025367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.635 [2024-11-05 15:55:54.033364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:21.635 [2024-11-05 15:55:54.035001] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:21.635 [2024-11-05 15:55:54.035034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:26:21.635 [2024-11-05 15:55:54.035041] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:21.635 [2024-11-05 15:55:54.035048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.635 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.892 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.892 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.892 "name": "Existed_Raid", 00:26:21.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.892 "strip_size_kb": 64, 00:26:21.892 "state": "configuring", 00:26:21.892 "raid_level": "concat", 00:26:21.892 "superblock": false, 00:26:21.892 "num_base_bdevs": 3, 00:26:21.892 "num_base_bdevs_discovered": 1, 00:26:21.892 "num_base_bdevs_operational": 3, 00:26:21.892 "base_bdevs_list": [ 00:26:21.892 { 00:26:21.892 "name": "BaseBdev1", 00:26:21.892 "uuid": "7a285b28-73c3-4f45-8155-c413738cf8e3", 00:26:21.892 "is_configured": true, 00:26:21.892 "data_offset": 0, 00:26:21.892 "data_size": 65536 00:26:21.892 }, 00:26:21.892 { 00:26:21.892 "name": "BaseBdev2", 00:26:21.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.892 "is_configured": false, 00:26:21.892 "data_offset": 0, 00:26:21.892 "data_size": 0 00:26:21.892 }, 00:26:21.892 { 00:26:21.892 "name": "BaseBdev3", 00:26:21.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.892 "is_configured": false, 00:26:21.892 "data_offset": 0, 00:26:21.892 "data_size": 0 00:26:21.892 } 00:26:21.892 ] 00:26:21.892 }' 00:26:21.892 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:21.892 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.151 [2024-11-05 15:55:54.369016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:22.151 BaseBdev2 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.151 [ 00:26:22.151 { 00:26:22.151 "name": "BaseBdev2", 00:26:22.151 "aliases": [ 00:26:22.151 "eb2ebcc3-7ac9-474a-98f4-b0016c44aa41" 00:26:22.151 ], 00:26:22.151 
"product_name": "Malloc disk", 00:26:22.151 "block_size": 512, 00:26:22.151 "num_blocks": 65536, 00:26:22.151 "uuid": "eb2ebcc3-7ac9-474a-98f4-b0016c44aa41", 00:26:22.151 "assigned_rate_limits": { 00:26:22.151 "rw_ios_per_sec": 0, 00:26:22.151 "rw_mbytes_per_sec": 0, 00:26:22.151 "r_mbytes_per_sec": 0, 00:26:22.151 "w_mbytes_per_sec": 0 00:26:22.151 }, 00:26:22.151 "claimed": true, 00:26:22.151 "claim_type": "exclusive_write", 00:26:22.151 "zoned": false, 00:26:22.151 "supported_io_types": { 00:26:22.151 "read": true, 00:26:22.151 "write": true, 00:26:22.151 "unmap": true, 00:26:22.151 "flush": true, 00:26:22.151 "reset": true, 00:26:22.151 "nvme_admin": false, 00:26:22.151 "nvme_io": false, 00:26:22.151 "nvme_io_md": false, 00:26:22.151 "write_zeroes": true, 00:26:22.151 "zcopy": true, 00:26:22.151 "get_zone_info": false, 00:26:22.151 "zone_management": false, 00:26:22.151 "zone_append": false, 00:26:22.151 "compare": false, 00:26:22.151 "compare_and_write": false, 00:26:22.151 "abort": true, 00:26:22.151 "seek_hole": false, 00:26:22.151 "seek_data": false, 00:26:22.151 "copy": true, 00:26:22.151 "nvme_iov_md": false 00:26:22.151 }, 00:26:22.151 "memory_domains": [ 00:26:22.151 { 00:26:22.151 "dma_device_id": "system", 00:26:22.151 "dma_device_type": 1 00:26:22.151 }, 00:26:22.151 { 00:26:22.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.151 "dma_device_type": 2 00:26:22.151 } 00:26:22.151 ], 00:26:22.151 "driver_specific": {} 00:26:22.151 } 00:26:22.151 ] 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.151 "name": "Existed_Raid", 00:26:22.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.151 "strip_size_kb": 64, 00:26:22.151 "state": "configuring", 00:26:22.151 "raid_level": "concat", 00:26:22.151 "superblock": false, 
00:26:22.151 "num_base_bdevs": 3, 00:26:22.151 "num_base_bdevs_discovered": 2, 00:26:22.151 "num_base_bdevs_operational": 3, 00:26:22.151 "base_bdevs_list": [ 00:26:22.151 { 00:26:22.151 "name": "BaseBdev1", 00:26:22.151 "uuid": "7a285b28-73c3-4f45-8155-c413738cf8e3", 00:26:22.151 "is_configured": true, 00:26:22.151 "data_offset": 0, 00:26:22.151 "data_size": 65536 00:26:22.151 }, 00:26:22.151 { 00:26:22.151 "name": "BaseBdev2", 00:26:22.151 "uuid": "eb2ebcc3-7ac9-474a-98f4-b0016c44aa41", 00:26:22.151 "is_configured": true, 00:26:22.151 "data_offset": 0, 00:26:22.151 "data_size": 65536 00:26:22.151 }, 00:26:22.151 { 00:26:22.151 "name": "BaseBdev3", 00:26:22.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.151 "is_configured": false, 00:26:22.151 "data_offset": 0, 00:26:22.151 "data_size": 0 00:26:22.151 } 00:26:22.151 ] 00:26:22.151 }' 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.151 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.410 [2024-11-05 15:55:54.734889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:22.410 [2024-11-05 15:55:54.735030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:22.410 [2024-11-05 15:55:54.735048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:22.410 [2024-11-05 15:55:54.735273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:22.410 [2024-11-05 15:55:54.735394] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:26:22.410 [2024-11-05 15:55:54.735401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:22.410 [2024-11-05 15:55:54.735595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.410 BaseBdev3 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.410 [ 00:26:22.410 { 00:26:22.410 "name": "BaseBdev3", 00:26:22.410 "aliases": [ 
00:26:22.410 "af65f682-fabe-45b0-ae6b-e0379647f55e" 00:26:22.410 ], 00:26:22.410 "product_name": "Malloc disk", 00:26:22.410 "block_size": 512, 00:26:22.410 "num_blocks": 65536, 00:26:22.410 "uuid": "af65f682-fabe-45b0-ae6b-e0379647f55e", 00:26:22.410 "assigned_rate_limits": { 00:26:22.410 "rw_ios_per_sec": 0, 00:26:22.410 "rw_mbytes_per_sec": 0, 00:26:22.410 "r_mbytes_per_sec": 0, 00:26:22.410 "w_mbytes_per_sec": 0 00:26:22.410 }, 00:26:22.410 "claimed": true, 00:26:22.410 "claim_type": "exclusive_write", 00:26:22.410 "zoned": false, 00:26:22.410 "supported_io_types": { 00:26:22.410 "read": true, 00:26:22.410 "write": true, 00:26:22.410 "unmap": true, 00:26:22.410 "flush": true, 00:26:22.410 "reset": true, 00:26:22.410 "nvme_admin": false, 00:26:22.410 "nvme_io": false, 00:26:22.410 "nvme_io_md": false, 00:26:22.410 "write_zeroes": true, 00:26:22.410 "zcopy": true, 00:26:22.410 "get_zone_info": false, 00:26:22.410 "zone_management": false, 00:26:22.410 "zone_append": false, 00:26:22.410 "compare": false, 00:26:22.410 "compare_and_write": false, 00:26:22.410 "abort": true, 00:26:22.410 "seek_hole": false, 00:26:22.410 "seek_data": false, 00:26:22.410 "copy": true, 00:26:22.410 "nvme_iov_md": false 00:26:22.410 }, 00:26:22.410 "memory_domains": [ 00:26:22.410 { 00:26:22.410 "dma_device_id": "system", 00:26:22.410 "dma_device_type": 1 00:26:22.410 }, 00:26:22.410 { 00:26:22.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.410 "dma_device_type": 2 00:26:22.410 } 00:26:22.410 ], 00:26:22.410 "driver_specific": {} 00:26:22.410 } 00:26:22.410 ] 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:22.410 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.411 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.411 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.411 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.411 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.411 "name": "Existed_Raid", 00:26:22.411 "uuid": "be88708f-2245-414f-bddc-57a41d2f54e6", 00:26:22.411 "strip_size_kb": 64, 00:26:22.411 "state": "online", 
00:26:22.411 "raid_level": "concat", 00:26:22.411 "superblock": false, 00:26:22.411 "num_base_bdevs": 3, 00:26:22.411 "num_base_bdevs_discovered": 3, 00:26:22.411 "num_base_bdevs_operational": 3, 00:26:22.411 "base_bdevs_list": [ 00:26:22.411 { 00:26:22.411 "name": "BaseBdev1", 00:26:22.411 "uuid": "7a285b28-73c3-4f45-8155-c413738cf8e3", 00:26:22.411 "is_configured": true, 00:26:22.411 "data_offset": 0, 00:26:22.411 "data_size": 65536 00:26:22.411 }, 00:26:22.411 { 00:26:22.411 "name": "BaseBdev2", 00:26:22.411 "uuid": "eb2ebcc3-7ac9-474a-98f4-b0016c44aa41", 00:26:22.411 "is_configured": true, 00:26:22.411 "data_offset": 0, 00:26:22.411 "data_size": 65536 00:26:22.411 }, 00:26:22.411 { 00:26:22.411 "name": "BaseBdev3", 00:26:22.411 "uuid": "af65f682-fabe-45b0-ae6b-e0379647f55e", 00:26:22.411 "is_configured": true, 00:26:22.411 "data_offset": 0, 00:26:22.411 "data_size": 65536 00:26:22.411 } 00:26:22.411 ] 00:26:22.411 }' 00:26:22.411 15:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.411 15:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:22.669 15:55:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.669 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.669 [2024-11-05 15:55:55.075236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:22.928 "name": "Existed_Raid", 00:26:22.928 "aliases": [ 00:26:22.928 "be88708f-2245-414f-bddc-57a41d2f54e6" 00:26:22.928 ], 00:26:22.928 "product_name": "Raid Volume", 00:26:22.928 "block_size": 512, 00:26:22.928 "num_blocks": 196608, 00:26:22.928 "uuid": "be88708f-2245-414f-bddc-57a41d2f54e6", 00:26:22.928 "assigned_rate_limits": { 00:26:22.928 "rw_ios_per_sec": 0, 00:26:22.928 "rw_mbytes_per_sec": 0, 00:26:22.928 "r_mbytes_per_sec": 0, 00:26:22.928 "w_mbytes_per_sec": 0 00:26:22.928 }, 00:26:22.928 "claimed": false, 00:26:22.928 "zoned": false, 00:26:22.928 "supported_io_types": { 00:26:22.928 "read": true, 00:26:22.928 "write": true, 00:26:22.928 "unmap": true, 00:26:22.928 "flush": true, 00:26:22.928 "reset": true, 00:26:22.928 "nvme_admin": false, 00:26:22.928 "nvme_io": false, 00:26:22.928 "nvme_io_md": false, 00:26:22.928 "write_zeroes": true, 00:26:22.928 "zcopy": false, 00:26:22.928 "get_zone_info": false, 00:26:22.928 "zone_management": false, 00:26:22.928 "zone_append": false, 00:26:22.928 "compare": false, 00:26:22.928 "compare_and_write": false, 00:26:22.928 "abort": false, 00:26:22.928 "seek_hole": false, 00:26:22.928 "seek_data": false, 00:26:22.928 "copy": false, 00:26:22.928 "nvme_iov_md": false 00:26:22.928 }, 00:26:22.928 "memory_domains": [ 00:26:22.928 { 00:26:22.928 "dma_device_id": "system", 00:26:22.928 "dma_device_type": 1 
00:26:22.928 }, 00:26:22.928 { 00:26:22.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.928 "dma_device_type": 2 00:26:22.928 }, 00:26:22.928 { 00:26:22.928 "dma_device_id": "system", 00:26:22.928 "dma_device_type": 1 00:26:22.928 }, 00:26:22.928 { 00:26:22.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.928 "dma_device_type": 2 00:26:22.928 }, 00:26:22.928 { 00:26:22.928 "dma_device_id": "system", 00:26:22.928 "dma_device_type": 1 00:26:22.928 }, 00:26:22.928 { 00:26:22.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.928 "dma_device_type": 2 00:26:22.928 } 00:26:22.928 ], 00:26:22.928 "driver_specific": { 00:26:22.928 "raid": { 00:26:22.928 "uuid": "be88708f-2245-414f-bddc-57a41d2f54e6", 00:26:22.928 "strip_size_kb": 64, 00:26:22.928 "state": "online", 00:26:22.928 "raid_level": "concat", 00:26:22.928 "superblock": false, 00:26:22.928 "num_base_bdevs": 3, 00:26:22.928 "num_base_bdevs_discovered": 3, 00:26:22.928 "num_base_bdevs_operational": 3, 00:26:22.928 "base_bdevs_list": [ 00:26:22.928 { 00:26:22.928 "name": "BaseBdev1", 00:26:22.928 "uuid": "7a285b28-73c3-4f45-8155-c413738cf8e3", 00:26:22.928 "is_configured": true, 00:26:22.928 "data_offset": 0, 00:26:22.928 "data_size": 65536 00:26:22.928 }, 00:26:22.928 { 00:26:22.928 "name": "BaseBdev2", 00:26:22.928 "uuid": "eb2ebcc3-7ac9-474a-98f4-b0016c44aa41", 00:26:22.928 "is_configured": true, 00:26:22.928 "data_offset": 0, 00:26:22.928 "data_size": 65536 00:26:22.928 }, 00:26:22.928 { 00:26:22.928 "name": "BaseBdev3", 00:26:22.928 "uuid": "af65f682-fabe-45b0-ae6b-e0379647f55e", 00:26:22.928 "is_configured": true, 00:26:22.928 "data_offset": 0, 00:26:22.928 "data_size": 65536 00:26:22.928 } 00:26:22.928 ] 00:26:22.928 } 00:26:22.928 } 00:26:22.928 }' 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:22.928 BaseBdev2 00:26:22.928 BaseBdev3' 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:22.928 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 [2024-11-05 15:55:55.279057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:22.929 [2024-11-05 15:55:55.279080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.929 [2024-11-05 15:55:55.279121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:26:22.929 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.248 "name": "Existed_Raid", 00:26:23.248 "uuid": "be88708f-2245-414f-bddc-57a41d2f54e6", 00:26:23.248 "strip_size_kb": 64, 00:26:23.248 "state": "offline", 00:26:23.248 "raid_level": "concat", 00:26:23.248 "superblock": false, 00:26:23.248 "num_base_bdevs": 3, 00:26:23.248 "num_base_bdevs_discovered": 2, 00:26:23.248 "num_base_bdevs_operational": 2, 00:26:23.248 "base_bdevs_list": [ 00:26:23.248 { 00:26:23.248 "name": null, 00:26:23.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.248 "is_configured": false, 00:26:23.248 "data_offset": 0, 00:26:23.248 "data_size": 65536 00:26:23.248 }, 00:26:23.248 { 00:26:23.248 "name": "BaseBdev2", 00:26:23.248 "uuid": "eb2ebcc3-7ac9-474a-98f4-b0016c44aa41", 00:26:23.248 "is_configured": true, 00:26:23.248 "data_offset": 0, 00:26:23.248 "data_size": 65536 00:26:23.248 }, 00:26:23.248 { 00:26:23.248 "name": "BaseBdev3", 00:26:23.248 "uuid": "af65f682-fabe-45b0-ae6b-e0379647f55e", 00:26:23.248 "is_configured": true, 00:26:23.248 "data_offset": 0, 00:26:23.248 "data_size": 65536 00:26:23.248 } 00:26:23.248 ] 00:26:23.248 }' 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.248 
15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.248 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.507 [2024-11-05 15:55:55.678597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.507 15:55:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.507 [2024-11-05 15:55:55.760920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:23.507 [2024-11-05 15:55:55.761041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:23.507 
15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.507 BaseBdev2 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.507 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.507 [ 00:26:23.507 { 00:26:23.507 "name": "BaseBdev2", 00:26:23.507 "aliases": [ 00:26:23.507 "2202053d-e82f-4255-9cbb-cab1f984ceed" 00:26:23.507 ], 00:26:23.507 "product_name": "Malloc disk", 00:26:23.507 "block_size": 512, 00:26:23.507 "num_blocks": 65536, 00:26:23.507 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:23.507 "assigned_rate_limits": { 00:26:23.507 "rw_ios_per_sec": 0, 00:26:23.507 "rw_mbytes_per_sec": 0, 00:26:23.507 "r_mbytes_per_sec": 0, 00:26:23.507 "w_mbytes_per_sec": 0 00:26:23.507 }, 00:26:23.507 "claimed": false, 00:26:23.507 "zoned": false, 00:26:23.507 "supported_io_types": { 00:26:23.507 "read": true, 00:26:23.507 "write": true, 00:26:23.507 "unmap": true, 00:26:23.507 "flush": true, 00:26:23.507 "reset": true, 00:26:23.507 "nvme_admin": false, 00:26:23.507 "nvme_io": false, 00:26:23.507 "nvme_io_md": false, 00:26:23.507 "write_zeroes": true, 00:26:23.507 "zcopy": true, 00:26:23.507 "get_zone_info": false, 00:26:23.507 "zone_management": false, 00:26:23.507 "zone_append": false, 00:26:23.507 "compare": false, 00:26:23.507 "compare_and_write": false, 00:26:23.507 "abort": true, 00:26:23.507 "seek_hole": false, 00:26:23.507 "seek_data": false, 00:26:23.508 "copy": true, 00:26:23.508 "nvme_iov_md": false 00:26:23.508 }, 00:26:23.508 "memory_domains": [ 00:26:23.508 { 00:26:23.508 "dma_device_id": "system", 00:26:23.508 "dma_device_type": 1 00:26:23.508 }, 00:26:23.508 { 00:26:23.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.508 "dma_device_type": 2 00:26:23.508 } 00:26:23.508 ], 00:26:23.508 "driver_specific": {} 00:26:23.508 } 00:26:23.508 ] 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:23.508 
15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.508 BaseBdev3 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.508 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.766 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.766 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:23.766 15:55:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.766 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.766 [ 00:26:23.766 { 00:26:23.766 "name": "BaseBdev3", 00:26:23.766 "aliases": [ 00:26:23.766 "3cbb5612-8c99-415d-b535-e448bca44937" 00:26:23.766 ], 00:26:23.766 "product_name": "Malloc disk", 00:26:23.766 "block_size": 512, 00:26:23.766 "num_blocks": 65536, 00:26:23.766 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:23.766 "assigned_rate_limits": { 00:26:23.766 "rw_ios_per_sec": 0, 00:26:23.766 "rw_mbytes_per_sec": 0, 00:26:23.766 "r_mbytes_per_sec": 0, 00:26:23.766 "w_mbytes_per_sec": 0 00:26:23.766 }, 00:26:23.766 "claimed": false, 00:26:23.766 "zoned": false, 00:26:23.766 "supported_io_types": { 00:26:23.766 "read": true, 00:26:23.766 "write": true, 00:26:23.766 "unmap": true, 00:26:23.766 "flush": true, 00:26:23.766 "reset": true, 00:26:23.766 "nvme_admin": false, 00:26:23.766 "nvme_io": false, 00:26:23.766 "nvme_io_md": false, 00:26:23.766 "write_zeroes": true, 00:26:23.766 "zcopy": true, 00:26:23.766 "get_zone_info": false, 00:26:23.766 "zone_management": false, 00:26:23.766 "zone_append": false, 00:26:23.766 "compare": false, 00:26:23.766 "compare_and_write": false, 00:26:23.766 "abort": true, 00:26:23.766 "seek_hole": false, 00:26:23.766 "seek_data": false, 00:26:23.766 "copy": true, 00:26:23.766 "nvme_iov_md": false 00:26:23.766 }, 00:26:23.767 "memory_domains": [ 00:26:23.767 { 00:26:23.767 "dma_device_id": "system", 00:26:23.767 "dma_device_type": 1 00:26:23.767 }, 00:26:23.767 { 00:26:23.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.767 "dma_device_type": 2 00:26:23.767 } 00:26:23.767 ], 00:26:23.767 "driver_specific": {} 00:26:23.767 } 00:26:23.767 ] 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:23.767 
15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.767 [2024-11-05 15:55:55.947350] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:23.767 [2024-11-05 15:55:55.947387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:23.767 [2024-11-05 15:55:55.947405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:23.767 [2024-11-05 15:55:55.948925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.767 "name": "Existed_Raid", 00:26:23.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.767 "strip_size_kb": 64, 00:26:23.767 "state": "configuring", 00:26:23.767 "raid_level": "concat", 00:26:23.767 "superblock": false, 00:26:23.767 "num_base_bdevs": 3, 00:26:23.767 "num_base_bdevs_discovered": 2, 00:26:23.767 "num_base_bdevs_operational": 3, 00:26:23.767 "base_bdevs_list": [ 00:26:23.767 { 00:26:23.767 "name": "BaseBdev1", 00:26:23.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.767 "is_configured": false, 00:26:23.767 "data_offset": 0, 00:26:23.767 "data_size": 0 00:26:23.767 }, 00:26:23.767 { 00:26:23.767 "name": "BaseBdev2", 00:26:23.767 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:23.767 "is_configured": true, 00:26:23.767 "data_offset": 0, 00:26:23.767 "data_size": 65536 00:26:23.767 }, 00:26:23.767 { 00:26:23.767 "name": "BaseBdev3", 00:26:23.767 "uuid": 
"3cbb5612-8c99-415d-b535-e448bca44937", 00:26:23.767 "is_configured": true, 00:26:23.767 "data_offset": 0, 00:26:23.767 "data_size": 65536 00:26:23.767 } 00:26:23.767 ] 00:26:23.767 }' 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.767 15:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.025 [2024-11-05 15:55:56.287407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:24.025 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.026 "name": "Existed_Raid", 00:26:24.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.026 "strip_size_kb": 64, 00:26:24.026 "state": "configuring", 00:26:24.026 "raid_level": "concat", 00:26:24.026 "superblock": false, 00:26:24.026 "num_base_bdevs": 3, 00:26:24.026 "num_base_bdevs_discovered": 1, 00:26:24.026 "num_base_bdevs_operational": 3, 00:26:24.026 "base_bdevs_list": [ 00:26:24.026 { 00:26:24.026 "name": "BaseBdev1", 00:26:24.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.026 "is_configured": false, 00:26:24.026 "data_offset": 0, 00:26:24.026 "data_size": 0 00:26:24.026 }, 00:26:24.026 { 00:26:24.026 "name": null, 00:26:24.026 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:24.026 "is_configured": false, 00:26:24.026 "data_offset": 0, 00:26:24.026 "data_size": 65536 00:26:24.026 }, 00:26:24.026 { 00:26:24.026 "name": "BaseBdev3", 00:26:24.026 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:24.026 "is_configured": true, 00:26:24.026 "data_offset": 0, 00:26:24.026 "data_size": 65536 00:26:24.026 } 00:26:24.026 ] 00:26:24.026 }' 00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:26:24.026 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.285 [2024-11-05 15:55:56.661828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.285 BaseBdev1 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.285 [ 00:26:24.285 { 00:26:24.285 "name": "BaseBdev1", 00:26:24.285 "aliases": [ 00:26:24.285 "105b7b6c-e8ba-40bc-b87a-ad903a11122a" 00:26:24.285 ], 00:26:24.285 "product_name": "Malloc disk", 00:26:24.285 "block_size": 512, 00:26:24.285 "num_blocks": 65536, 00:26:24.285 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:24.285 "assigned_rate_limits": { 00:26:24.285 "rw_ios_per_sec": 0, 00:26:24.285 "rw_mbytes_per_sec": 0, 00:26:24.285 "r_mbytes_per_sec": 0, 00:26:24.285 "w_mbytes_per_sec": 0 00:26:24.285 }, 00:26:24.285 "claimed": true, 00:26:24.285 "claim_type": "exclusive_write", 00:26:24.285 "zoned": false, 00:26:24.285 "supported_io_types": { 00:26:24.285 "read": true, 00:26:24.285 "write": true, 00:26:24.285 "unmap": true, 00:26:24.285 "flush": true, 00:26:24.285 "reset": true, 00:26:24.285 "nvme_admin": false, 00:26:24.285 "nvme_io": false, 00:26:24.285 "nvme_io_md": false, 00:26:24.285 "write_zeroes": true, 00:26:24.285 "zcopy": true, 00:26:24.285 "get_zone_info": false, 00:26:24.285 "zone_management": false, 00:26:24.285 "zone_append": false, 00:26:24.285 "compare": false, 00:26:24.285 "compare_and_write": false, 
00:26:24.285 "abort": true, 00:26:24.285 "seek_hole": false, 00:26:24.285 "seek_data": false, 00:26:24.285 "copy": true, 00:26:24.285 "nvme_iov_md": false 00:26:24.285 }, 00:26:24.285 "memory_domains": [ 00:26:24.285 { 00:26:24.285 "dma_device_id": "system", 00:26:24.285 "dma_device_type": 1 00:26:24.285 }, 00:26:24.285 { 00:26:24.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.285 "dma_device_type": 2 00:26:24.285 } 00:26:24.285 ], 00:26:24.285 "driver_specific": {} 00:26:24.285 } 00:26:24.285 ] 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.285 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.286 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.543 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.543 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.543 "name": "Existed_Raid", 00:26:24.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.543 "strip_size_kb": 64, 00:26:24.543 "state": "configuring", 00:26:24.543 "raid_level": "concat", 00:26:24.543 "superblock": false, 00:26:24.543 "num_base_bdevs": 3, 00:26:24.543 "num_base_bdevs_discovered": 2, 00:26:24.543 "num_base_bdevs_operational": 3, 00:26:24.543 "base_bdevs_list": [ 00:26:24.543 { 00:26:24.543 "name": "BaseBdev1", 00:26:24.543 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:24.543 "is_configured": true, 00:26:24.543 "data_offset": 0, 00:26:24.543 "data_size": 65536 00:26:24.543 }, 00:26:24.543 { 00:26:24.543 "name": null, 00:26:24.543 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:24.543 "is_configured": false, 00:26:24.543 "data_offset": 0, 00:26:24.543 "data_size": 65536 00:26:24.543 }, 00:26:24.543 { 00:26:24.543 "name": "BaseBdev3", 00:26:24.544 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:24.544 "is_configured": true, 00:26:24.544 "data_offset": 0, 00:26:24.544 "data_size": 65536 00:26:24.544 } 00:26:24.544 ] 00:26:24.544 }' 00:26:24.544 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:24.544 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 
00:26:24.802 15:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.802 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.802 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 15:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 [2024-11-05 15:55:57.021947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.802 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.802 "name": "Existed_Raid", 00:26:24.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.802 "strip_size_kb": 64, 00:26:24.802 "state": "configuring", 00:26:24.802 "raid_level": "concat", 00:26:24.802 "superblock": false, 00:26:24.802 "num_base_bdevs": 3, 00:26:24.802 "num_base_bdevs_discovered": 1, 00:26:24.802 "num_base_bdevs_operational": 3, 00:26:24.802 "base_bdevs_list": [ 00:26:24.803 { 00:26:24.803 "name": "BaseBdev1", 00:26:24.803 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:24.803 "is_configured": true, 00:26:24.803 "data_offset": 0, 00:26:24.803 "data_size": 65536 00:26:24.803 }, 00:26:24.803 { 00:26:24.803 "name": null, 00:26:24.803 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:24.803 "is_configured": false, 00:26:24.803 "data_offset": 0, 00:26:24.803 "data_size": 65536 00:26:24.803 }, 00:26:24.803 { 00:26:24.803 "name": null, 00:26:24.803 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:24.803 "is_configured": false, 00:26:24.803 "data_offset": 0, 00:26:24.803 "data_size": 65536 
00:26:24.803 } 00:26:24.803 ] 00:26:24.803 }' 00:26:24.803 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:24.803 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.059 [2024-11-05 15:55:57.362043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:26:25.059 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.060 "name": "Existed_Raid", 00:26:25.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.060 "strip_size_kb": 64, 00:26:25.060 "state": "configuring", 00:26:25.060 "raid_level": "concat", 00:26:25.060 "superblock": false, 00:26:25.060 "num_base_bdevs": 3, 00:26:25.060 "num_base_bdevs_discovered": 2, 00:26:25.060 "num_base_bdevs_operational": 3, 00:26:25.060 "base_bdevs_list": [ 00:26:25.060 { 00:26:25.060 "name": "BaseBdev1", 00:26:25.060 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:25.060 "is_configured": true, 00:26:25.060 "data_offset": 0, 00:26:25.060 "data_size": 65536 00:26:25.060 }, 00:26:25.060 { 
00:26:25.060 "name": null, 00:26:25.060 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:25.060 "is_configured": false, 00:26:25.060 "data_offset": 0, 00:26:25.060 "data_size": 65536 00:26:25.060 }, 00:26:25.060 { 00:26:25.060 "name": "BaseBdev3", 00:26:25.060 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:25.060 "is_configured": true, 00:26:25.060 "data_offset": 0, 00:26:25.060 "data_size": 65536 00:26:25.060 } 00:26:25.060 ] 00:26:25.060 }' 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.060 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.317 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.317 [2024-11-05 15:55:57.710118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.575 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.575 "name": "Existed_Raid", 00:26:25.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.575 "strip_size_kb": 64, 00:26:25.575 "state": "configuring", 00:26:25.575 "raid_level": "concat", 00:26:25.575 "superblock": false, 00:26:25.575 "num_base_bdevs": 3, 
00:26:25.575 "num_base_bdevs_discovered": 1, 00:26:25.575 "num_base_bdevs_operational": 3, 00:26:25.575 "base_bdevs_list": [ 00:26:25.575 { 00:26:25.575 "name": null, 00:26:25.575 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:25.575 "is_configured": false, 00:26:25.575 "data_offset": 0, 00:26:25.575 "data_size": 65536 00:26:25.575 }, 00:26:25.575 { 00:26:25.575 "name": null, 00:26:25.575 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:25.575 "is_configured": false, 00:26:25.575 "data_offset": 0, 00:26:25.575 "data_size": 65536 00:26:25.575 }, 00:26:25.575 { 00:26:25.575 "name": "BaseBdev3", 00:26:25.575 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:25.575 "is_configured": true, 00:26:25.575 "data_offset": 0, 00:26:25.575 "data_size": 65536 00:26:25.575 } 00:26:25.575 ] 00:26:25.575 }' 00:26:25.576 15:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.576 15:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.834 15:55:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 [2024-11-05 15:55:58.137063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.834 "name": "Existed_Raid", 00:26:25.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.834 "strip_size_kb": 64, 00:26:25.834 "state": "configuring", 00:26:25.834 "raid_level": "concat", 00:26:25.834 "superblock": false, 00:26:25.834 "num_base_bdevs": 3, 00:26:25.834 "num_base_bdevs_discovered": 2, 00:26:25.834 "num_base_bdevs_operational": 3, 00:26:25.834 "base_bdevs_list": [ 00:26:25.834 { 00:26:25.834 "name": null, 00:26:25.834 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:25.834 "is_configured": false, 00:26:25.834 "data_offset": 0, 00:26:25.834 "data_size": 65536 00:26:25.834 }, 00:26:25.834 { 00:26:25.834 "name": "BaseBdev2", 00:26:25.834 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:25.834 "is_configured": true, 00:26:25.834 "data_offset": 0, 00:26:25.834 "data_size": 65536 00:26:25.834 }, 00:26:25.834 { 00:26:25.834 "name": "BaseBdev3", 00:26:25.834 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:25.834 "is_configured": true, 00:26:25.834 "data_offset": 0, 00:26:25.834 "data_size": 65536 00:26:25.834 } 00:26:25.834 ] 00:26:25.834 }' 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.834 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:26.092 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 105b7b6c-e8ba-40bc-b87a-ad903a11122a 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.351 [2024-11-05 15:55:58.539689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:26.351 [2024-11-05 15:55:58.539719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:26.351 [2024-11-05 15:55:58.539726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:26.351 [2024-11-05 15:55:58.539948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:26.351 [2024-11-05 15:55:58.540055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:26.351 [2024-11-05 15:55:58.540061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:26.351 [2024-11-05 15:55:58.540231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:26:26.351 NewBaseBdev 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.351 [ 00:26:26.351 { 00:26:26.351 "name": "NewBaseBdev", 00:26:26.351 "aliases": [ 00:26:26.351 "105b7b6c-e8ba-40bc-b87a-ad903a11122a" 00:26:26.351 ], 00:26:26.351 "product_name": "Malloc disk", 00:26:26.351 "block_size": 512, 00:26:26.351 "num_blocks": 65536, 00:26:26.351 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:26.351 "assigned_rate_limits": { 
00:26:26.351 "rw_ios_per_sec": 0, 00:26:26.351 "rw_mbytes_per_sec": 0, 00:26:26.351 "r_mbytes_per_sec": 0, 00:26:26.351 "w_mbytes_per_sec": 0 00:26:26.351 }, 00:26:26.351 "claimed": true, 00:26:26.351 "claim_type": "exclusive_write", 00:26:26.351 "zoned": false, 00:26:26.351 "supported_io_types": { 00:26:26.351 "read": true, 00:26:26.351 "write": true, 00:26:26.351 "unmap": true, 00:26:26.351 "flush": true, 00:26:26.351 "reset": true, 00:26:26.351 "nvme_admin": false, 00:26:26.351 "nvme_io": false, 00:26:26.351 "nvme_io_md": false, 00:26:26.351 "write_zeroes": true, 00:26:26.351 "zcopy": true, 00:26:26.351 "get_zone_info": false, 00:26:26.351 "zone_management": false, 00:26:26.351 "zone_append": false, 00:26:26.351 "compare": false, 00:26:26.351 "compare_and_write": false, 00:26:26.351 "abort": true, 00:26:26.351 "seek_hole": false, 00:26:26.351 "seek_data": false, 00:26:26.351 "copy": true, 00:26:26.351 "nvme_iov_md": false 00:26:26.351 }, 00:26:26.351 "memory_domains": [ 00:26:26.351 { 00:26:26.351 "dma_device_id": "system", 00:26:26.351 "dma_device_type": 1 00:26:26.351 }, 00:26:26.351 { 00:26:26.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.351 "dma_device_type": 2 00:26:26.351 } 00:26:26.351 ], 00:26:26.351 "driver_specific": {} 00:26:26.351 } 00:26:26.351 ] 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.351 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.352 "name": "Existed_Raid", 00:26:26.352 "uuid": "f832d093-40e8-4a53-9f7f-1c764533288a", 00:26:26.352 "strip_size_kb": 64, 00:26:26.352 "state": "online", 00:26:26.352 "raid_level": "concat", 00:26:26.352 "superblock": false, 00:26:26.352 "num_base_bdevs": 3, 00:26:26.352 "num_base_bdevs_discovered": 3, 00:26:26.352 "num_base_bdevs_operational": 3, 00:26:26.352 "base_bdevs_list": [ 00:26:26.352 { 00:26:26.352 "name": "NewBaseBdev", 00:26:26.352 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:26.352 "is_configured": true, 00:26:26.352 "data_offset": 0, 00:26:26.352 "data_size": 65536 00:26:26.352 }, 00:26:26.352 { 00:26:26.352 "name": 
"BaseBdev2", 00:26:26.352 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:26.352 "is_configured": true, 00:26:26.352 "data_offset": 0, 00:26:26.352 "data_size": 65536 00:26:26.352 }, 00:26:26.352 { 00:26:26.352 "name": "BaseBdev3", 00:26:26.352 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:26.352 "is_configured": true, 00:26:26.352 "data_offset": 0, 00:26:26.352 "data_size": 65536 00:26:26.352 } 00:26:26.352 ] 00:26:26.352 }' 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.352 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:26.610 [2024-11-05 15:55:58.888066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:26.610 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:26.610 "name": "Existed_Raid", 00:26:26.610 "aliases": [ 00:26:26.610 "f832d093-40e8-4a53-9f7f-1c764533288a" 00:26:26.610 ], 00:26:26.610 "product_name": "Raid Volume", 00:26:26.610 "block_size": 512, 00:26:26.610 "num_blocks": 196608, 00:26:26.610 "uuid": "f832d093-40e8-4a53-9f7f-1c764533288a", 00:26:26.610 "assigned_rate_limits": { 00:26:26.610 "rw_ios_per_sec": 0, 00:26:26.610 "rw_mbytes_per_sec": 0, 00:26:26.610 "r_mbytes_per_sec": 0, 00:26:26.610 "w_mbytes_per_sec": 0 00:26:26.610 }, 00:26:26.610 "claimed": false, 00:26:26.610 "zoned": false, 00:26:26.610 "supported_io_types": { 00:26:26.610 "read": true, 00:26:26.610 "write": true, 00:26:26.610 "unmap": true, 00:26:26.610 "flush": true, 00:26:26.610 "reset": true, 00:26:26.610 "nvme_admin": false, 00:26:26.610 "nvme_io": false, 00:26:26.610 "nvme_io_md": false, 00:26:26.610 "write_zeroes": true, 00:26:26.610 "zcopy": false, 00:26:26.610 "get_zone_info": false, 00:26:26.610 "zone_management": false, 00:26:26.610 "zone_append": false, 00:26:26.610 "compare": false, 00:26:26.610 "compare_and_write": false, 00:26:26.610 "abort": false, 00:26:26.610 "seek_hole": false, 00:26:26.610 "seek_data": false, 00:26:26.610 "copy": false, 00:26:26.610 "nvme_iov_md": false 00:26:26.610 }, 00:26:26.610 "memory_domains": [ 00:26:26.610 { 00:26:26.610 "dma_device_id": "system", 00:26:26.610 "dma_device_type": 1 00:26:26.610 }, 00:26:26.610 { 00:26:26.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.611 "dma_device_type": 2 00:26:26.611 }, 00:26:26.611 { 00:26:26.611 "dma_device_id": "system", 00:26:26.611 "dma_device_type": 1 00:26:26.611 }, 00:26:26.611 { 00:26:26.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.611 "dma_device_type": 2 00:26:26.611 }, 00:26:26.611 { 00:26:26.611 "dma_device_id": "system", 00:26:26.611 "dma_device_type": 1 00:26:26.611 }, 00:26:26.611 { 00:26:26.611 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:26.611 "dma_device_type": 2 00:26:26.611 } 00:26:26.611 ], 00:26:26.611 "driver_specific": { 00:26:26.611 "raid": { 00:26:26.611 "uuid": "f832d093-40e8-4a53-9f7f-1c764533288a", 00:26:26.611 "strip_size_kb": 64, 00:26:26.611 "state": "online", 00:26:26.611 "raid_level": "concat", 00:26:26.611 "superblock": false, 00:26:26.611 "num_base_bdevs": 3, 00:26:26.611 "num_base_bdevs_discovered": 3, 00:26:26.611 "num_base_bdevs_operational": 3, 00:26:26.611 "base_bdevs_list": [ 00:26:26.611 { 00:26:26.611 "name": "NewBaseBdev", 00:26:26.611 "uuid": "105b7b6c-e8ba-40bc-b87a-ad903a11122a", 00:26:26.611 "is_configured": true, 00:26:26.611 "data_offset": 0, 00:26:26.611 "data_size": 65536 00:26:26.611 }, 00:26:26.611 { 00:26:26.611 "name": "BaseBdev2", 00:26:26.611 "uuid": "2202053d-e82f-4255-9cbb-cab1f984ceed", 00:26:26.611 "is_configured": true, 00:26:26.611 "data_offset": 0, 00:26:26.611 "data_size": 65536 00:26:26.611 }, 00:26:26.611 { 00:26:26.611 "name": "BaseBdev3", 00:26:26.611 "uuid": "3cbb5612-8c99-415d-b535-e448bca44937", 00:26:26.611 "is_configured": true, 00:26:26.611 "data_offset": 0, 00:26:26.611 "data_size": 65536 00:26:26.611 } 00:26:26.611 ] 00:26:26.611 } 00:26:26.611 } 00:26:26.611 }' 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:26.611 BaseBdev2 00:26:26.611 BaseBdev3' 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.611 15:55:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.611 15:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.611 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.869 [2024-11-05 15:55:59.075832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:26.869 [2024-11-05 15:55:59.075862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:26.869 [2024-11-05 15:55:59.075918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:26.869 [2024-11-05 15:55:59.075965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:26.869 [2024-11-05 15:55:59.075981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63916 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 63916 ']' 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63916 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63916 00:26:26.869 killing process with pid 63916 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63916' 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63916 00:26:26.869 [2024-11-05 15:55:59.104194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:26.869 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63916 00:26:26.869 [2024-11-05 15:55:59.250083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:27.438 15:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:27.438 00:26:27.438 real 0m7.393s 00:26:27.438 user 0m11.968s 00:26:27.438 sys 0m1.195s 00:26:27.438 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:27.438 ************************************ 00:26:27.438 END TEST raid_state_function_test 00:26:27.438 ************************************ 00:26:27.438 15:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.438 15:55:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:26:27.438 15:55:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:27.438 15:55:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:27.438 15:55:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:27.697 ************************************ 00:26:27.697 START TEST raid_state_function_test_sb 00:26:27.697 ************************************ 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:27.697 Process raid pid: 64507 00:26:27.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64507 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64507' 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64507 00:26:27.697 15:55:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64507 ']' 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:27.697 15:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.697 [2024-11-05 15:55:59.922343] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:26:27.697 [2024-11-05 15:55:59.922586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.697 [2024-11-05 15:56:00.080187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.955 [2024-11-05 15:56:00.165600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.955 [2024-11-05 15:56:00.276218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:27.955 [2024-11-05 15:56:00.276251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.520 [2024-11-05 15:56:00.775488] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:28.520 [2024-11-05 15:56:00.775535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:28.520 [2024-11-05 15:56:00.775544] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:28.520 [2024-11-05 15:56:00.775552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:28.520 [2024-11-05 15:56:00.775558] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:26:28.520 [2024-11-05 15:56:00.775566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.520 15:56:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.521 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.521 "name": "Existed_Raid", 00:26:28.521 "uuid": "7bbed3f9-e621-485a-a80d-1de38b187c5a", 00:26:28.521 "strip_size_kb": 64, 00:26:28.521 "state": "configuring", 00:26:28.521 "raid_level": "concat", 00:26:28.521 "superblock": true, 00:26:28.521 "num_base_bdevs": 3, 00:26:28.521 "num_base_bdevs_discovered": 0, 00:26:28.521 "num_base_bdevs_operational": 3, 00:26:28.521 "base_bdevs_list": [ 00:26:28.521 { 00:26:28.521 "name": "BaseBdev1", 00:26:28.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.521 "is_configured": false, 00:26:28.521 "data_offset": 0, 00:26:28.521 "data_size": 0 00:26:28.521 }, 00:26:28.521 { 00:26:28.521 "name": "BaseBdev2", 00:26:28.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.521 "is_configured": false, 00:26:28.521 "data_offset": 0, 00:26:28.521 "data_size": 0 00:26:28.521 }, 00:26:28.521 { 00:26:28.521 "name": "BaseBdev3", 00:26:28.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.521 "is_configured": false, 00:26:28.521 "data_offset": 0, 00:26:28.521 "data_size": 0 00:26:28.521 } 00:26:28.521 ] 00:26:28.521 }' 00:26:28.521 15:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.521 15:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.779 [2024-11-05 15:56:01.123503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:28.779 [2024-11-05 15:56:01.123677] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.779 [2024-11-05 15:56:01.131507] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:28.779 [2024-11-05 15:56:01.131609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:28.779 [2024-11-05 15:56:01.131656] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:28.779 [2024-11-05 15:56:01.131677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:28.779 [2024-11-05 15:56:01.131720] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:28.779 [2024-11-05 15:56:01.131741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.779 [2024-11-05 15:56:01.159245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:28.779 BaseBdev1 
00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.779 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.779 [ 00:26:28.779 { 00:26:28.779 "name": "BaseBdev1", 00:26:28.779 "aliases": [ 00:26:28.779 "c7ed7382-e380-4223-809c-a5e001a15694" 00:26:28.779 ], 00:26:28.779 "product_name": "Malloc disk", 00:26:28.779 "block_size": 512, 00:26:28.779 "num_blocks": 65536, 00:26:28.779 "uuid": "c7ed7382-e380-4223-809c-a5e001a15694", 00:26:28.779 "assigned_rate_limits": { 00:26:28.779 
"rw_ios_per_sec": 0, 00:26:28.779 "rw_mbytes_per_sec": 0, 00:26:28.779 "r_mbytes_per_sec": 0, 00:26:28.779 "w_mbytes_per_sec": 0 00:26:28.779 }, 00:26:28.779 "claimed": true, 00:26:28.779 "claim_type": "exclusive_write", 00:26:28.779 "zoned": false, 00:26:28.779 "supported_io_types": { 00:26:28.779 "read": true, 00:26:28.779 "write": true, 00:26:28.779 "unmap": true, 00:26:28.779 "flush": true, 00:26:28.779 "reset": true, 00:26:28.779 "nvme_admin": false, 00:26:28.779 "nvme_io": false, 00:26:28.779 "nvme_io_md": false, 00:26:28.779 "write_zeroes": true, 00:26:28.779 "zcopy": true, 00:26:28.779 "get_zone_info": false, 00:26:28.779 "zone_management": false, 00:26:28.779 "zone_append": false, 00:26:28.779 "compare": false, 00:26:28.779 "compare_and_write": false, 00:26:28.779 "abort": true, 00:26:28.780 "seek_hole": false, 00:26:28.780 "seek_data": false, 00:26:28.780 "copy": true, 00:26:28.780 "nvme_iov_md": false 00:26:28.780 }, 00:26:28.780 "memory_domains": [ 00:26:28.780 { 00:26:28.780 "dma_device_id": "system", 00:26:28.780 "dma_device_type": 1 00:26:28.780 }, 00:26:28.780 { 00:26:28.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.780 "dma_device_type": 2 00:26:28.780 } 00:26:28.780 ], 00:26:28.780 "driver_specific": {} 00:26:28.780 } 00:26:28.780 ] 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.780 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.081 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.081 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.082 "name": "Existed_Raid", 00:26:29.082 "uuid": "58fb4b14-4c2e-4d53-bf8f-b2b137e550a4", 00:26:29.082 "strip_size_kb": 64, 00:26:29.082 "state": "configuring", 00:26:29.082 "raid_level": "concat", 00:26:29.082 "superblock": true, 00:26:29.082 "num_base_bdevs": 3, 00:26:29.082 "num_base_bdevs_discovered": 1, 00:26:29.082 "num_base_bdevs_operational": 3, 00:26:29.082 "base_bdevs_list": [ 00:26:29.082 { 00:26:29.082 "name": "BaseBdev1", 00:26:29.082 "uuid": "c7ed7382-e380-4223-809c-a5e001a15694", 00:26:29.082 "is_configured": true, 00:26:29.082 "data_offset": 2048, 00:26:29.082 "data_size": 
63488 00:26:29.082 }, 00:26:29.082 { 00:26:29.082 "name": "BaseBdev2", 00:26:29.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.082 "is_configured": false, 00:26:29.082 "data_offset": 0, 00:26:29.082 "data_size": 0 00:26:29.082 }, 00:26:29.082 { 00:26:29.082 "name": "BaseBdev3", 00:26:29.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.082 "is_configured": false, 00:26:29.082 "data_offset": 0, 00:26:29.082 "data_size": 0 00:26:29.082 } 00:26:29.082 ] 00:26:29.082 }' 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.082 [2024-11-05 15:56:01.487345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:29.082 [2024-11-05 15:56:01.487482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.082 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.082 [2024-11-05 15:56:01.495384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:29.082 [2024-11-05 
15:56:01.496968] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:29.082 [2024-11-05 15:56:01.497068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:29.082 [2024-11-05 15:56:01.497116] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:29.082 [2024-11-05 15:56:01.497137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.339 "name": "Existed_Raid", 00:26:29.339 "uuid": "588d30cd-1c1c-4736-8c20-870915dba01a", 00:26:29.339 "strip_size_kb": 64, 00:26:29.339 "state": "configuring", 00:26:29.339 "raid_level": "concat", 00:26:29.339 "superblock": true, 00:26:29.339 "num_base_bdevs": 3, 00:26:29.339 "num_base_bdevs_discovered": 1, 00:26:29.339 "num_base_bdevs_operational": 3, 00:26:29.339 "base_bdevs_list": [ 00:26:29.339 { 00:26:29.339 "name": "BaseBdev1", 00:26:29.339 "uuid": "c7ed7382-e380-4223-809c-a5e001a15694", 00:26:29.339 "is_configured": true, 00:26:29.339 "data_offset": 2048, 00:26:29.339 "data_size": 63488 00:26:29.339 }, 00:26:29.339 { 00:26:29.339 "name": "BaseBdev2", 00:26:29.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.339 "is_configured": false, 00:26:29.339 "data_offset": 0, 00:26:29.339 "data_size": 0 00:26:29.339 }, 00:26:29.339 { 00:26:29.339 "name": "BaseBdev3", 00:26:29.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.339 "is_configured": false, 00:26:29.339 "data_offset": 0, 00:26:29.339 "data_size": 0 00:26:29.339 } 00:26:29.339 ] 00:26:29.339 }' 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.339 15:56:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.598 [2024-11-05 15:56:01.821691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:29.598 BaseBdev2 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.598 [ 00:26:29.598 { 00:26:29.598 "name": "BaseBdev2", 00:26:29.598 "aliases": [ 00:26:29.598 "631ce670-4af4-47f1-97f3-3c9090f7599d" 00:26:29.598 ], 00:26:29.598 "product_name": "Malloc disk", 00:26:29.598 "block_size": 512, 00:26:29.598 "num_blocks": 65536, 00:26:29.598 "uuid": "631ce670-4af4-47f1-97f3-3c9090f7599d", 00:26:29.598 "assigned_rate_limits": { 00:26:29.598 "rw_ios_per_sec": 0, 00:26:29.598 "rw_mbytes_per_sec": 0, 00:26:29.598 "r_mbytes_per_sec": 0, 00:26:29.598 "w_mbytes_per_sec": 0 00:26:29.598 }, 00:26:29.598 "claimed": true, 00:26:29.598 "claim_type": "exclusive_write", 00:26:29.598 "zoned": false, 00:26:29.598 "supported_io_types": { 00:26:29.598 "read": true, 00:26:29.598 "write": true, 00:26:29.598 "unmap": true, 00:26:29.598 "flush": true, 00:26:29.598 "reset": true, 00:26:29.598 "nvme_admin": false, 00:26:29.598 "nvme_io": false, 00:26:29.598 "nvme_io_md": false, 00:26:29.598 "write_zeroes": true, 00:26:29.598 "zcopy": true, 00:26:29.598 "get_zone_info": false, 00:26:29.598 "zone_management": false, 00:26:29.598 "zone_append": false, 00:26:29.598 "compare": false, 00:26:29.598 "compare_and_write": false, 00:26:29.598 "abort": true, 00:26:29.598 "seek_hole": false, 00:26:29.598 "seek_data": false, 00:26:29.598 "copy": true, 00:26:29.598 "nvme_iov_md": false 00:26:29.598 }, 00:26:29.598 "memory_domains": [ 00:26:29.598 { 00:26:29.598 "dma_device_id": "system", 00:26:29.598 "dma_device_type": 1 00:26:29.598 }, 00:26:29.598 { 00:26:29.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:29.598 "dma_device_type": 2 00:26:29.598 } 00:26:29.598 ], 00:26:29.598 "driver_specific": {} 00:26:29.598 } 00:26:29.598 ] 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.598 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.598 "name": "Existed_Raid", 00:26:29.598 "uuid": "588d30cd-1c1c-4736-8c20-870915dba01a", 00:26:29.598 "strip_size_kb": 64, 00:26:29.598 "state": "configuring", 00:26:29.598 "raid_level": "concat", 00:26:29.598 "superblock": true, 00:26:29.598 "num_base_bdevs": 3, 00:26:29.598 "num_base_bdevs_discovered": 2, 00:26:29.598 "num_base_bdevs_operational": 3, 00:26:29.598 "base_bdevs_list": [ 00:26:29.598 { 00:26:29.598 "name": "BaseBdev1", 00:26:29.598 "uuid": "c7ed7382-e380-4223-809c-a5e001a15694", 00:26:29.598 "is_configured": true, 00:26:29.598 "data_offset": 2048, 00:26:29.598 "data_size": 63488 00:26:29.598 }, 00:26:29.598 { 00:26:29.598 "name": "BaseBdev2", 00:26:29.598 "uuid": "631ce670-4af4-47f1-97f3-3c9090f7599d", 00:26:29.598 "is_configured": true, 00:26:29.598 "data_offset": 2048, 00:26:29.598 "data_size": 63488 00:26:29.599 }, 00:26:29.599 { 00:26:29.599 "name": "BaseBdev3", 00:26:29.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.599 "is_configured": false, 00:26:29.599 "data_offset": 0, 00:26:29.599 "data_size": 0 00:26:29.599 } 00:26:29.599 ] 00:26:29.599 }' 00:26:29.599 15:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.599 15:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 [2024-11-05 15:56:02.193927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:29.857 [2024-11-05 15:56:02.194107] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:29.857 [2024-11-05 15:56:02.194124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:29.857 [2024-11-05 15:56:02.194342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:29.857 BaseBdev3 00:26:29.857 [2024-11-05 15:56:02.194449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:29.857 [2024-11-05 15:56:02.194456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:29.857 [2024-11-05 15:56:02.194559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 [ 00:26:29.857 { 00:26:29.857 "name": "BaseBdev3", 00:26:29.857 "aliases": [ 00:26:29.857 "b69fa163-fa3c-4f67-8d7a-2b190e029d22" 00:26:29.857 ], 00:26:29.857 "product_name": "Malloc disk", 00:26:29.857 "block_size": 512, 00:26:29.857 "num_blocks": 65536, 00:26:29.857 "uuid": "b69fa163-fa3c-4f67-8d7a-2b190e029d22", 00:26:29.857 "assigned_rate_limits": { 00:26:29.857 "rw_ios_per_sec": 0, 00:26:29.857 "rw_mbytes_per_sec": 0, 00:26:29.857 "r_mbytes_per_sec": 0, 00:26:29.857 "w_mbytes_per_sec": 0 00:26:29.857 }, 00:26:29.857 "claimed": true, 00:26:29.857 "claim_type": "exclusive_write", 00:26:29.857 "zoned": false, 00:26:29.857 "supported_io_types": { 00:26:29.857 "read": true, 00:26:29.857 "write": true, 00:26:29.857 "unmap": true, 00:26:29.857 "flush": true, 00:26:29.857 "reset": true, 00:26:29.857 "nvme_admin": false, 00:26:29.857 "nvme_io": false, 00:26:29.857 "nvme_io_md": false, 00:26:29.857 "write_zeroes": true, 00:26:29.857 "zcopy": true, 00:26:29.857 "get_zone_info": false, 00:26:29.857 "zone_management": false, 00:26:29.857 "zone_append": false, 00:26:29.857 "compare": false, 00:26:29.857 "compare_and_write": false, 00:26:29.857 "abort": true, 00:26:29.857 "seek_hole": false, 00:26:29.857 "seek_data": false, 00:26:29.857 "copy": true, 00:26:29.857 "nvme_iov_md": false 00:26:29.857 }, 00:26:29.857 "memory_domains": [ 00:26:29.857 { 00:26:29.857 "dma_device_id": "system", 00:26:29.857 "dma_device_type": 1 00:26:29.857 }, 00:26:29.857 { 00:26:29.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:29.857 "dma_device_type": 2 00:26:29.857 } 00:26:29.857 ], 00:26:29.857 "driver_specific": 
{} 00:26:29.857 } 00:26:29.857 ] 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:29.857 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.858 "name": "Existed_Raid", 00:26:29.858 "uuid": "588d30cd-1c1c-4736-8c20-870915dba01a", 00:26:29.858 "strip_size_kb": 64, 00:26:29.858 "state": "online", 00:26:29.858 "raid_level": "concat", 00:26:29.858 "superblock": true, 00:26:29.858 "num_base_bdevs": 3, 00:26:29.858 "num_base_bdevs_discovered": 3, 00:26:29.858 "num_base_bdevs_operational": 3, 00:26:29.858 "base_bdevs_list": [ 00:26:29.858 { 00:26:29.858 "name": "BaseBdev1", 00:26:29.858 "uuid": "c7ed7382-e380-4223-809c-a5e001a15694", 00:26:29.858 "is_configured": true, 00:26:29.858 "data_offset": 2048, 00:26:29.858 "data_size": 63488 00:26:29.858 }, 00:26:29.858 { 00:26:29.858 "name": "BaseBdev2", 00:26:29.858 "uuid": "631ce670-4af4-47f1-97f3-3c9090f7599d", 00:26:29.858 "is_configured": true, 00:26:29.858 "data_offset": 2048, 00:26:29.858 "data_size": 63488 00:26:29.858 }, 00:26:29.858 { 00:26:29.858 "name": "BaseBdev3", 00:26:29.858 "uuid": "b69fa163-fa3c-4f67-8d7a-2b190e029d22", 00:26:29.858 "is_configured": true, 00:26:29.858 "data_offset": 2048, 00:26:29.858 "data_size": 63488 00:26:29.858 } 00:26:29.858 ] 00:26:29.858 }' 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.858 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.115 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:30.115 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:30.115 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:26:30.115 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:30.115 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:30.115 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.375 [2024-11-05 15:56:02.538303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:30.375 "name": "Existed_Raid", 00:26:30.375 "aliases": [ 00:26:30.375 "588d30cd-1c1c-4736-8c20-870915dba01a" 00:26:30.375 ], 00:26:30.375 "product_name": "Raid Volume", 00:26:30.375 "block_size": 512, 00:26:30.375 "num_blocks": 190464, 00:26:30.375 "uuid": "588d30cd-1c1c-4736-8c20-870915dba01a", 00:26:30.375 "assigned_rate_limits": { 00:26:30.375 "rw_ios_per_sec": 0, 00:26:30.375 "rw_mbytes_per_sec": 0, 00:26:30.375 "r_mbytes_per_sec": 0, 00:26:30.375 "w_mbytes_per_sec": 0 00:26:30.375 }, 00:26:30.375 "claimed": false, 00:26:30.375 "zoned": false, 00:26:30.375 "supported_io_types": { 00:26:30.375 "read": true, 00:26:30.375 "write": true, 00:26:30.375 "unmap": true, 00:26:30.375 "flush": true, 00:26:30.375 "reset": true, 00:26:30.375 "nvme_admin": false, 00:26:30.375 "nvme_io": false, 00:26:30.375 "nvme_io_md": false, 00:26:30.375 
"write_zeroes": true, 00:26:30.375 "zcopy": false, 00:26:30.375 "get_zone_info": false, 00:26:30.375 "zone_management": false, 00:26:30.375 "zone_append": false, 00:26:30.375 "compare": false, 00:26:30.375 "compare_and_write": false, 00:26:30.375 "abort": false, 00:26:30.375 "seek_hole": false, 00:26:30.375 "seek_data": false, 00:26:30.375 "copy": false, 00:26:30.375 "nvme_iov_md": false 00:26:30.375 }, 00:26:30.375 "memory_domains": [ 00:26:30.375 { 00:26:30.375 "dma_device_id": "system", 00:26:30.375 "dma_device_type": 1 00:26:30.375 }, 00:26:30.375 { 00:26:30.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.375 "dma_device_type": 2 00:26:30.375 }, 00:26:30.375 { 00:26:30.375 "dma_device_id": "system", 00:26:30.375 "dma_device_type": 1 00:26:30.375 }, 00:26:30.375 { 00:26:30.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.375 "dma_device_type": 2 00:26:30.375 }, 00:26:30.375 { 00:26:30.375 "dma_device_id": "system", 00:26:30.375 "dma_device_type": 1 00:26:30.375 }, 00:26:30.375 { 00:26:30.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.375 "dma_device_type": 2 00:26:30.375 } 00:26:30.375 ], 00:26:30.375 "driver_specific": { 00:26:30.375 "raid": { 00:26:30.375 "uuid": "588d30cd-1c1c-4736-8c20-870915dba01a", 00:26:30.375 "strip_size_kb": 64, 00:26:30.375 "state": "online", 00:26:30.375 "raid_level": "concat", 00:26:30.375 "superblock": true, 00:26:30.375 "num_base_bdevs": 3, 00:26:30.375 "num_base_bdevs_discovered": 3, 00:26:30.375 "num_base_bdevs_operational": 3, 00:26:30.375 "base_bdevs_list": [ 00:26:30.375 { 00:26:30.375 "name": "BaseBdev1", 00:26:30.375 "uuid": "c7ed7382-e380-4223-809c-a5e001a15694", 00:26:30.375 "is_configured": true, 00:26:30.375 "data_offset": 2048, 00:26:30.375 "data_size": 63488 00:26:30.375 }, 00:26:30.375 { 00:26:30.375 "name": "BaseBdev2", 00:26:30.375 "uuid": "631ce670-4af4-47f1-97f3-3c9090f7599d", 00:26:30.375 "is_configured": true, 00:26:30.375 "data_offset": 2048, 00:26:30.375 "data_size": 63488 00:26:30.375 }, 
00:26:30.375 { 00:26:30.375 "name": "BaseBdev3", 00:26:30.375 "uuid": "b69fa163-fa3c-4f67-8d7a-2b190e029d22", 00:26:30.375 "is_configured": true, 00:26:30.375 "data_offset": 2048, 00:26:30.375 "data_size": 63488 00:26:30.375 } 00:26:30.375 ] 00:26:30.375 } 00:26:30.375 } 00:26:30.375 }' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:30.375 BaseBdev2 00:26:30.375 BaseBdev3' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:30.375 
15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.375 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.375 [2024-11-05 15:56:02.726111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:30.376 [2024-11-05 15:56:02.726133] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:30.376 [2024-11-05 15:56:02.726172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.376 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.635 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.635 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:30.635 "name": "Existed_Raid", 00:26:30.635 "uuid": "588d30cd-1c1c-4736-8c20-870915dba01a", 00:26:30.635 "strip_size_kb": 64, 00:26:30.635 "state": "offline", 00:26:30.635 "raid_level": "concat", 00:26:30.635 "superblock": true, 00:26:30.635 "num_base_bdevs": 3, 00:26:30.635 "num_base_bdevs_discovered": 2, 00:26:30.635 "num_base_bdevs_operational": 2, 00:26:30.635 "base_bdevs_list": [ 00:26:30.635 { 00:26:30.635 "name": null, 00:26:30.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.635 "is_configured": false, 00:26:30.635 "data_offset": 0, 00:26:30.635 "data_size": 63488 00:26:30.635 }, 00:26:30.635 { 00:26:30.635 "name": "BaseBdev2", 00:26:30.635 "uuid": "631ce670-4af4-47f1-97f3-3c9090f7599d", 00:26:30.635 "is_configured": true, 00:26:30.635 "data_offset": 2048, 00:26:30.635 "data_size": 63488 00:26:30.635 }, 00:26:30.635 { 00:26:30.635 "name": "BaseBdev3", 00:26:30.635 "uuid": "b69fa163-fa3c-4f67-8d7a-2b190e029d22", 
00:26:30.635 "is_configured": true, 00:26:30.635 "data_offset": 2048, 00:26:30.635 "data_size": 63488 00:26:30.635 } 00:26:30.635 ] 00:26:30.635 }' 00:26:30.635 15:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:30.635 15:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.902 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.903 [2024-11-05 15:56:03.152058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:30.903 [2024-11-05 15:56:03.241836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:26:30.903 [2024-11-05 15:56:03.241885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:26:30.903 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.190 BaseBdev2
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.190 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.190 [
00:26:31.190 {
00:26:31.190 "name": "BaseBdev2",
00:26:31.190 "aliases": [
00:26:31.190 "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e"
00:26:31.190 ],
00:26:31.190 "product_name": "Malloc disk",
00:26:31.190 "block_size": 512,
00:26:31.190 "num_blocks": 65536,
00:26:31.190 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e",
00:26:31.190 "assigned_rate_limits": {
00:26:31.190 "rw_ios_per_sec": 0,
00:26:31.190 "rw_mbytes_per_sec": 0,
00:26:31.190 "r_mbytes_per_sec": 0,
00:26:31.190 "w_mbytes_per_sec": 0
00:26:31.190 },
00:26:31.191 "claimed": false,
00:26:31.191 "zoned": false,
00:26:31.191 "supported_io_types": {
00:26:31.191 "read": true,
00:26:31.191 "write": true,
00:26:31.191 "unmap": true,
00:26:31.191 "flush": true,
00:26:31.191 "reset": true,
00:26:31.191 "nvme_admin": false,
00:26:31.191 "nvme_io": false,
00:26:31.191 "nvme_io_md": false,
00:26:31.191 "write_zeroes": true,
00:26:31.191 "zcopy": true,
00:26:31.191 "get_zone_info": false,
00:26:31.191 "zone_management": false,
00:26:31.191 "zone_append": false,
00:26:31.191 "compare": false,
00:26:31.191 "compare_and_write": false,
00:26:31.191 "abort": true,
00:26:31.191 "seek_hole": false,
00:26:31.191 "seek_data": false,
00:26:31.191 "copy": true,
00:26:31.191 "nvme_iov_md": false
00:26:31.191 },
00:26:31.191 "memory_domains": [
00:26:31.191 {
00:26:31.191 "dma_device_id": "system",
00:26:31.191 "dma_device_type": 1
00:26:31.191 },
00:26:31.191 {
00:26:31.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:31.191 "dma_device_type": 2
00:26:31.191 }
00:26:31.191 ],
00:26:31.191 "driver_specific": {}
00:26:31.191 }
00:26:31.191 ]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.191 BaseBdev3
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.191 [
00:26:31.191 {
00:26:31.191 "name": "BaseBdev3",
00:26:31.191 "aliases": [
00:26:31.191 "c364080c-9345-4008-b02b-989ce9debd95"
00:26:31.191 ],
00:26:31.191 "product_name": "Malloc disk",
00:26:31.191 "block_size": 512,
00:26:31.191 "num_blocks": 65536,
00:26:31.191 "uuid": "c364080c-9345-4008-b02b-989ce9debd95",
00:26:31.191 "assigned_rate_limits": {
00:26:31.191 "rw_ios_per_sec": 0,
00:26:31.191 "rw_mbytes_per_sec": 0,
00:26:31.191 "r_mbytes_per_sec": 0,
00:26:31.191 "w_mbytes_per_sec": 0
00:26:31.191 },
00:26:31.191 "claimed": false,
00:26:31.191 "zoned": false,
00:26:31.191 "supported_io_types": {
00:26:31.191 "read": true,
00:26:31.191 "write": true,
00:26:31.191 "unmap": true,
00:26:31.191 "flush": true,
00:26:31.191 "reset": true,
00:26:31.191 "nvme_admin": false,
00:26:31.191 "nvme_io": false,
00:26:31.191 "nvme_io_md": false,
00:26:31.191 "write_zeroes": true,
00:26:31.191 "zcopy": true,
00:26:31.191 "get_zone_info": false,
00:26:31.191 "zone_management": false,
00:26:31.191 "zone_append": false,
00:26:31.191 "compare": false,
00:26:31.191 "compare_and_write": false,
00:26:31.191 "abort": true,
00:26:31.191 "seek_hole": false,
00:26:31.191 "seek_data": false,
00:26:31.191 "copy": true,
00:26:31.191 "nvme_iov_md": false
00:26:31.191 },
00:26:31.191 "memory_domains": [
00:26:31.191 {
00:26:31.191 "dma_device_id": "system",
00:26:31.191 "dma_device_type": 1
00:26:31.191 },
00:26:31.191 {
00:26:31.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:31.191 "dma_device_type": 2
00:26:31.191 }
00:26:31.191 ],
00:26:31.191 "driver_specific": {}
00:26:31.191 }
00:26:31.191 ]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.191 [2024-11-05 15:56:03.435284] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:26:31.191 [2024-11-05 15:56:03.435404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:26:31.191 [2024-11-05 15:56:03.435470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:26:31.191 [2024-11-05 15:56:03.436957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:31.191 "name": "Existed_Raid",
00:26:31.191 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162",
00:26:31.191 "strip_size_kb": 64,
00:26:31.191 "state": "configuring",
00:26:31.191 "raid_level": "concat",
00:26:31.191 "superblock": true,
00:26:31.191 "num_base_bdevs": 3,
00:26:31.191 "num_base_bdevs_discovered": 2,
00:26:31.191 "num_base_bdevs_operational": 3,
00:26:31.191 "base_bdevs_list": [
00:26:31.191 {
00:26:31.191 "name": "BaseBdev1",
00:26:31.191 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:31.191 "is_configured": false,
00:26:31.191 "data_offset": 0,
00:26:31.191 "data_size": 0
00:26:31.191 },
00:26:31.191 {
00:26:31.191 "name": "BaseBdev2",
00:26:31.191 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e",
00:26:31.191 "is_configured": true,
00:26:31.191 "data_offset": 2048,
00:26:31.191 "data_size": 63488
00:26:31.191 },
00:26:31.191 {
00:26:31.191 "name": "BaseBdev3",
00:26:31.191 "uuid": "c364080c-9345-4008-b02b-989ce9debd95",
00:26:31.191 "is_configured": true,
00:26:31.191 "data_offset": 2048,
00:26:31.191 "data_size": 63488
00:26:31.191 }
00:26:31.191 ]
00:26:31.191 }'
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:31.191 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.450 [2024-11-05 15:56:03.735337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:31.450 "name": "Existed_Raid",
00:26:31.450 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162",
00:26:31.450 "strip_size_kb": 64,
00:26:31.450 "state": "configuring",
00:26:31.450 "raid_level": "concat",
00:26:31.450 "superblock": true,
00:26:31.450 "num_base_bdevs": 3,
00:26:31.450 "num_base_bdevs_discovered": 1,
00:26:31.450 "num_base_bdevs_operational": 3,
00:26:31.450 "base_bdevs_list": [
00:26:31.450 {
00:26:31.450 "name": "BaseBdev1",
00:26:31.450 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:31.450 "is_configured": false,
00:26:31.450 "data_offset": 0,
00:26:31.450 "data_size": 0
00:26:31.450 },
00:26:31.450 {
00:26:31.450 "name": null,
00:26:31.450 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e",
00:26:31.450 "is_configured": false,
00:26:31.450 "data_offset": 0,
00:26:31.450 "data_size": 63488
00:26:31.450 },
00:26:31.450 {
00:26:31.450 "name": "BaseBdev3",
00:26:31.450 "uuid": "c364080c-9345-4008-b02b-989ce9debd95",
00:26:31.450 "is_configured": true,
00:26:31.450 "data_offset": 2048,
00:26:31.450 "data_size": 63488
00:26:31.450 }
00:26:31.450 ]
00:26:31.450 }'
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:31.450 15:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.708 [2024-11-05 15:56:04.077257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:26:31.708 BaseBdev1
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:26:31.708 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.709 [
00:26:31.709 {
00:26:31.709 "name": "BaseBdev1",
00:26:31.709 "aliases": [
00:26:31.709 "11b476ff-2b1f-4a89-a5e8-74eec496b91b"
00:26:31.709 ],
00:26:31.709 "product_name": "Malloc disk",
00:26:31.709 "block_size": 512,
00:26:31.709 "num_blocks": 65536,
00:26:31.709 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b",
00:26:31.709 "assigned_rate_limits": {
00:26:31.709 "rw_ios_per_sec": 0,
00:26:31.709 "rw_mbytes_per_sec": 0,
00:26:31.709 "r_mbytes_per_sec": 0,
00:26:31.709 "w_mbytes_per_sec": 0
00:26:31.709 },
00:26:31.709 "claimed": true,
00:26:31.709 "claim_type": "exclusive_write",
00:26:31.709 "zoned": false,
00:26:31.709 "supported_io_types": {
00:26:31.709 "read": true,
00:26:31.709 "write": true,
00:26:31.709 "unmap": true,
00:26:31.709 "flush": true,
00:26:31.709 "reset": true,
00:26:31.709 "nvme_admin": false,
00:26:31.709 "nvme_io": false,
00:26:31.709 "nvme_io_md": false,
00:26:31.709 "write_zeroes": true,
00:26:31.709 "zcopy": true,
00:26:31.709 "get_zone_info": false,
00:26:31.709 "zone_management": false,
00:26:31.709 "zone_append": false,
00:26:31.709 "compare": false,
00:26:31.709 "compare_and_write": false,
00:26:31.709 "abort": true,
00:26:31.709 "seek_hole": false,
00:26:31.709 "seek_data": false,
00:26:31.709 "copy": true,
00:26:31.709 "nvme_iov_md": false
00:26:31.709 },
00:26:31.709 "memory_domains": [
00:26:31.709 {
00:26:31.709 "dma_device_id": "system",
00:26:31.709 "dma_device_type": 1
00:26:31.709 },
00:26:31.709 {
00:26:31.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:31.709 "dma_device_type": 2
00:26:31.709 }
00:26:31.709 ],
00:26:31.709 "driver_specific": {}
00:26:31.709 }
00:26:31.709 ]
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:31.709 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.968 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:31.968 "name": "Existed_Raid",
00:26:31.968 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162",
00:26:31.968 "strip_size_kb": 64,
00:26:31.968 "state": "configuring",
00:26:31.968 "raid_level": "concat",
00:26:31.968 "superblock": true,
00:26:31.968 "num_base_bdevs": 3,
00:26:31.968 "num_base_bdevs_discovered": 2,
00:26:31.968 "num_base_bdevs_operational": 3,
00:26:31.968 "base_bdevs_list": [
00:26:31.968 {
00:26:31.968 "name": "BaseBdev1",
00:26:31.968 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b",
00:26:31.968 "is_configured": true,
00:26:31.968 "data_offset": 2048,
00:26:31.968 "data_size": 63488
00:26:31.968 },
00:26:31.968 {
00:26:31.968 "name": null,
00:26:31.968 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e",
00:26:31.968 "is_configured": false,
00:26:31.968 "data_offset": 0,
00:26:31.968 "data_size": 63488
00:26:31.968 },
00:26:31.968 {
00:26:31.968 "name": "BaseBdev3",
00:26:31.968 "uuid": "c364080c-9345-4008-b02b-989ce9debd95",
00:26:31.968 "is_configured": true,
00:26:31.968 "data_offset": 2048,
00:26:31.968 "data_size": 63488
00:26:31.968 }
00:26:31.968 ]
00:26:31.968 }'
00:26:31.968 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:31.968 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.225 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:32.225 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:26:32.225 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.225 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.225 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.225 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:26:32.225 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.226 [2024-11-05 15:56:04.465373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:32.226 "name": "Existed_Raid",
00:26:32.226 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162",
00:26:32.226 "strip_size_kb": 64,
00:26:32.226 "state": "configuring",
00:26:32.226 "raid_level": "concat",
00:26:32.226 "superblock": true,
00:26:32.226 "num_base_bdevs": 3,
00:26:32.226 "num_base_bdevs_discovered": 1,
00:26:32.226 "num_base_bdevs_operational": 3,
00:26:32.226 "base_bdevs_list": [
00:26:32.226 {
00:26:32.226 "name": "BaseBdev1",
00:26:32.226 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b",
00:26:32.226 "is_configured": true,
00:26:32.226 "data_offset": 2048,
00:26:32.226 "data_size": 63488
00:26:32.226 },
00:26:32.226 {
00:26:32.226 "name": null,
00:26:32.226 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e",
00:26:32.226 "is_configured": false,
00:26:32.226 "data_offset": 0,
00:26:32.226 "data_size": 63488
00:26:32.226 },
00:26:32.226 {
00:26:32.226 "name": null,
00:26:32.226 "uuid": "c364080c-9345-4008-b02b-989ce9debd95",
00:26:32.226 "is_configured": false,
00:26:32.226 "data_offset": 0,
00:26:32.226 "data_size": 63488
00:26:32.226 }
00:26:32.226 ]
00:26:32.226 }'
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:32.226 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.483 [2024-11-05 15:56:04.821463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:32.483 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:32.484 "name": "Existed_Raid",
00:26:32.484 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162",
00:26:32.484 "strip_size_kb": 64,
00:26:32.484 "state": "configuring",
00:26:32.484 "raid_level": "concat",
00:26:32.484 "superblock": true,
00:26:32.484 "num_base_bdevs": 3,
00:26:32.484 "num_base_bdevs_discovered": 2,
00:26:32.484 "num_base_bdevs_operational": 3,
00:26:32.484 "base_bdevs_list": [
00:26:32.484 {
00:26:32.484 "name": "BaseBdev1",
00:26:32.484 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b",
00:26:32.484 "is_configured": true,
00:26:32.484 "data_offset": 2048,
00:26:32.484 "data_size": 63488
00:26:32.484 },
00:26:32.484 {
00:26:32.484 "name": null,
00:26:32.484 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e",
00:26:32.484 "is_configured": false,
00:26:32.484 "data_offset": 0,
00:26:32.484 "data_size": 63488
00:26:32.484 },
00:26:32.484 {
00:26:32.484 "name": "BaseBdev3",
00:26:32.484 "uuid": "c364080c-9345-4008-b02b-989ce9debd95",
00:26:32.484 "is_configured": true,
00:26:32.484 "data_offset": 2048,
00:26:32.484 "data_size": 63488
00:26:32.484 }
00:26:32.484 ]
00:26:32.484 }'
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:32.484 15:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.742 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:32.742 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:26:32.742 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.742 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:32.742 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:33.007 [2024-11-05 15:56:05.169543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:33.007 "name": "Existed_Raid",
00:26:33.007 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162",
00:26:33.007 "strip_size_kb": 64,
00:26:33.007 "state": "configuring",
00:26:33.007 "raid_level": "concat",
00:26:33.007 "superblock": true,
00:26:33.007 "num_base_bdevs": 3,
00:26:33.007 "num_base_bdevs_discovered": 1,
00:26:33.007 "num_base_bdevs_operational": 3,
00:26:33.007 "base_bdevs_list": [
00:26:33.007 {
00:26:33.007 "name": null,
00:26:33.007 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b",
00:26:33.007 "is_configured": false,
00:26:33.007 "data_offset": 0,
00:26:33.007 "data_size": 63488
00:26:33.007 },
00:26:33.007 {
00:26:33.007 "name": null,
00:26:33.007 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e",
00:26:33.007 "is_configured": false,
00:26:33.007 "data_offset": 0,
00:26:33.007 "data_size": 63488
00:26:33.007 },
00:26:33.007 {
00:26:33.007 "name": "BaseBdev3",
00:26:33.007 "uuid": "c364080c-9345-4008-b02b-989ce9debd95",
00:26:33.007 "is_configured": true,
00:26:33.007 "data_offset": 2048,
00:26:33.007 "data_size": 63488
00:26:33.007 }
00:26:33.007 ]
00:26:33.007 }'
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:33.007 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:26:33.268 [2024-11-05 15:56:05.578527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:26:33.268 15:56:05
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:33.268 "name": "Existed_Raid", 00:26:33.268 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162", 00:26:33.268 "strip_size_kb": 64, 00:26:33.268 "state": "configuring", 00:26:33.268 "raid_level": "concat", 00:26:33.268 "superblock": true, 00:26:33.268 "num_base_bdevs": 3, 00:26:33.268 
"num_base_bdevs_discovered": 2, 00:26:33.268 "num_base_bdevs_operational": 3, 00:26:33.268 "base_bdevs_list": [ 00:26:33.268 { 00:26:33.268 "name": null, 00:26:33.268 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b", 00:26:33.268 "is_configured": false, 00:26:33.268 "data_offset": 0, 00:26:33.268 "data_size": 63488 00:26:33.268 }, 00:26:33.268 { 00:26:33.268 "name": "BaseBdev2", 00:26:33.268 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e", 00:26:33.268 "is_configured": true, 00:26:33.268 "data_offset": 2048, 00:26:33.268 "data_size": 63488 00:26:33.268 }, 00:26:33.268 { 00:26:33.268 "name": "BaseBdev3", 00:26:33.268 "uuid": "c364080c-9345-4008-b02b-989ce9debd95", 00:26:33.268 "is_configured": true, 00:26:33.268 "data_offset": 2048, 00:26:33.268 "data_size": 63488 00:26:33.268 } 00:26:33.268 ] 00:26:33.268 }' 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:33.268 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:33.526 15:56:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.526 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 11b476ff-2b1f-4a89-a5e8-74eec496b91b 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.784 [2024-11-05 15:56:05.993116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:33.784 [2024-11-05 15:56:05.993262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:33.784 [2024-11-05 15:56:05.993274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:33.784 NewBaseBdev 00:26:33.784 [2024-11-05 15:56:05.993464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:33.784 [2024-11-05 15:56:05.993560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:33.784 [2024-11-05 15:56:05.993566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:33.784 [2024-11-05 15:56:05.993659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:26:33.784 
15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.784 15:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.784 [ 00:26:33.784 { 00:26:33.784 "name": "NewBaseBdev", 00:26:33.784 "aliases": [ 00:26:33.784 "11b476ff-2b1f-4a89-a5e8-74eec496b91b" 00:26:33.784 ], 00:26:33.784 "product_name": "Malloc disk", 00:26:33.784 "block_size": 512, 00:26:33.784 "num_blocks": 65536, 00:26:33.784 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b", 00:26:33.784 "assigned_rate_limits": { 00:26:33.784 "rw_ios_per_sec": 0, 00:26:33.784 "rw_mbytes_per_sec": 0, 00:26:33.784 "r_mbytes_per_sec": 0, 00:26:33.784 "w_mbytes_per_sec": 0 00:26:33.784 }, 00:26:33.784 "claimed": true, 00:26:33.784 "claim_type": "exclusive_write", 00:26:33.784 "zoned": false, 00:26:33.784 "supported_io_types": { 00:26:33.784 "read": true, 00:26:33.784 "write": true, 00:26:33.784 
"unmap": true, 00:26:33.784 "flush": true, 00:26:33.784 "reset": true, 00:26:33.784 "nvme_admin": false, 00:26:33.784 "nvme_io": false, 00:26:33.784 "nvme_io_md": false, 00:26:33.784 "write_zeroes": true, 00:26:33.784 "zcopy": true, 00:26:33.784 "get_zone_info": false, 00:26:33.784 "zone_management": false, 00:26:33.784 "zone_append": false, 00:26:33.784 "compare": false, 00:26:33.784 "compare_and_write": false, 00:26:33.784 "abort": true, 00:26:33.784 "seek_hole": false, 00:26:33.784 "seek_data": false, 00:26:33.784 "copy": true, 00:26:33.784 "nvme_iov_md": false 00:26:33.784 }, 00:26:33.784 "memory_domains": [ 00:26:33.784 { 00:26:33.784 "dma_device_id": "system", 00:26:33.784 "dma_device_type": 1 00:26:33.784 }, 00:26:33.784 { 00:26:33.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.784 "dma_device_type": 2 00:26:33.784 } 00:26:33.784 ], 00:26:33.784 "driver_specific": {} 00:26:33.784 } 00:26:33.784 ] 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:33.784 "name": "Existed_Raid", 00:26:33.784 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162", 00:26:33.784 "strip_size_kb": 64, 00:26:33.784 "state": "online", 00:26:33.784 "raid_level": "concat", 00:26:33.784 "superblock": true, 00:26:33.784 "num_base_bdevs": 3, 00:26:33.784 "num_base_bdevs_discovered": 3, 00:26:33.784 "num_base_bdevs_operational": 3, 00:26:33.784 "base_bdevs_list": [ 00:26:33.784 { 00:26:33.784 "name": "NewBaseBdev", 00:26:33.784 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b", 00:26:33.784 "is_configured": true, 00:26:33.784 "data_offset": 2048, 00:26:33.784 "data_size": 63488 00:26:33.784 }, 00:26:33.784 { 00:26:33.784 "name": "BaseBdev2", 00:26:33.784 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e", 00:26:33.784 "is_configured": true, 00:26:33.784 "data_offset": 2048, 00:26:33.784 "data_size": 63488 00:26:33.784 }, 00:26:33.784 { 00:26:33.784 "name": "BaseBdev3", 00:26:33.784 "uuid": "c364080c-9345-4008-b02b-989ce9debd95", 
00:26:33.784 "is_configured": true, 00:26:33.784 "data_offset": 2048, 00:26:33.784 "data_size": 63488 00:26:33.784 } 00:26:33.784 ] 00:26:33.784 }' 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:33.784 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.042 [2024-11-05 15:56:06.345478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.042 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.042 "name": "Existed_Raid", 00:26:34.042 "aliases": [ 00:26:34.042 "7cc362a4-0b94-45cb-8de6-6b40510af162" 00:26:34.042 ], 00:26:34.042 
"product_name": "Raid Volume", 00:26:34.042 "block_size": 512, 00:26:34.042 "num_blocks": 190464, 00:26:34.042 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162", 00:26:34.042 "assigned_rate_limits": { 00:26:34.042 "rw_ios_per_sec": 0, 00:26:34.042 "rw_mbytes_per_sec": 0, 00:26:34.042 "r_mbytes_per_sec": 0, 00:26:34.042 "w_mbytes_per_sec": 0 00:26:34.042 }, 00:26:34.042 "claimed": false, 00:26:34.042 "zoned": false, 00:26:34.042 "supported_io_types": { 00:26:34.042 "read": true, 00:26:34.042 "write": true, 00:26:34.042 "unmap": true, 00:26:34.042 "flush": true, 00:26:34.042 "reset": true, 00:26:34.042 "nvme_admin": false, 00:26:34.042 "nvme_io": false, 00:26:34.042 "nvme_io_md": false, 00:26:34.042 "write_zeroes": true, 00:26:34.042 "zcopy": false, 00:26:34.042 "get_zone_info": false, 00:26:34.042 "zone_management": false, 00:26:34.042 "zone_append": false, 00:26:34.042 "compare": false, 00:26:34.042 "compare_and_write": false, 00:26:34.042 "abort": false, 00:26:34.042 "seek_hole": false, 00:26:34.042 "seek_data": false, 00:26:34.042 "copy": false, 00:26:34.042 "nvme_iov_md": false 00:26:34.042 }, 00:26:34.042 "memory_domains": [ 00:26:34.042 { 00:26:34.042 "dma_device_id": "system", 00:26:34.042 "dma_device_type": 1 00:26:34.042 }, 00:26:34.042 { 00:26:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.042 "dma_device_type": 2 00:26:34.042 }, 00:26:34.042 { 00:26:34.042 "dma_device_id": "system", 00:26:34.042 "dma_device_type": 1 00:26:34.042 }, 00:26:34.042 { 00:26:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.042 "dma_device_type": 2 00:26:34.042 }, 00:26:34.042 { 00:26:34.042 "dma_device_id": "system", 00:26:34.042 "dma_device_type": 1 00:26:34.042 }, 00:26:34.042 { 00:26:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.042 "dma_device_type": 2 00:26:34.042 } 00:26:34.042 ], 00:26:34.042 "driver_specific": { 00:26:34.042 "raid": { 00:26:34.042 "uuid": "7cc362a4-0b94-45cb-8de6-6b40510af162", 00:26:34.042 "strip_size_kb": 64, 00:26:34.042 
"state": "online", 00:26:34.042 "raid_level": "concat", 00:26:34.042 "superblock": true, 00:26:34.042 "num_base_bdevs": 3, 00:26:34.042 "num_base_bdevs_discovered": 3, 00:26:34.042 "num_base_bdevs_operational": 3, 00:26:34.042 "base_bdevs_list": [ 00:26:34.042 { 00:26:34.042 "name": "NewBaseBdev", 00:26:34.042 "uuid": "11b476ff-2b1f-4a89-a5e8-74eec496b91b", 00:26:34.042 "is_configured": true, 00:26:34.042 "data_offset": 2048, 00:26:34.042 "data_size": 63488 00:26:34.042 }, 00:26:34.042 { 00:26:34.042 "name": "BaseBdev2", 00:26:34.042 "uuid": "a0e1a8b8-1dc3-4242-bae0-1469e4cd0a2e", 00:26:34.042 "is_configured": true, 00:26:34.042 "data_offset": 2048, 00:26:34.042 "data_size": 63488 00:26:34.042 }, 00:26:34.042 { 00:26:34.042 "name": "BaseBdev3", 00:26:34.042 "uuid": "c364080c-9345-4008-b02b-989ce9debd95", 00:26:34.042 "is_configured": true, 00:26:34.042 "data_offset": 2048, 00:26:34.042 "data_size": 63488 00:26:34.042 } 00:26:34.042 ] 00:26:34.042 } 00:26:34.043 } 00:26:34.043 }' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:34.043 BaseBdev2 00:26:34.043 BaseBdev3' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.043 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.300 15:56:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.300 [2024-11-05 15:56:06.509232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:34.300 [2024-11-05 15:56:06.509253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:34.300 [2024-11-05 15:56:06.509306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.300 [2024-11-05 15:56:06.509351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:34.300 [2024-11-05 15:56:06.509361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64507 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64507 ']' 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
64507 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64507 00:26:34.300 killing process with pid 64507 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64507' 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64507 00:26:34.300 [2024-11-05 15:56:06.537162] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:34.300 15:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64507 00:26:34.300 [2024-11-05 15:56:06.682621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:34.875 ************************************ 00:26:34.875 END TEST raid_state_function_test_sb 00:26:34.875 ************************************ 00:26:34.875 15:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:34.875 00:26:34.875 real 0m7.383s 00:26:34.875 user 0m11.989s 00:26:34.875 sys 0m1.164s 00:26:34.875 15:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:34.875 15:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.875 15:56:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:26:34.875 15:56:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:34.875 
15:56:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:34.875 15:56:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:34.875 ************************************ 00:26:34.875 START TEST raid_superblock_test 00:26:34.875 ************************************ 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:34.875 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:35.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65100 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65100 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65100 ']' 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.159 15:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:35.159 [2024-11-05 15:56:07.342932] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:26:35.159 [2024-11-05 15:56:07.343193] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65100 ] 00:26:35.159 [2024-11-05 15:56:07.499049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.418 [2024-11-05 15:56:07.581167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.418 [2024-11-05 15:56:07.689780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:35.418 [2024-11-05 15:56:07.689949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:35.984 
15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.984 malloc1 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.984 [2024-11-05 15:56:08.220176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:35.984 [2024-11-05 15:56:08.220227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.984 [2024-11-05 15:56:08.220244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:35.984 [2024-11-05 15:56:08.220251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.984 [2024-11-05 15:56:08.221999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.984 [2024-11-05 15:56:08.222028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:35.984 pt1 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.984 malloc2 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.984 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.984 [2024-11-05 15:56:08.255585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:35.985 [2024-11-05 15:56:08.255730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.985 [2024-11-05 
15:56:08.255752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:35.985 [2024-11-05 15:56:08.255758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.985 [2024-11-05 15:56:08.257442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.985 [2024-11-05 15:56:08.257466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:35.985 pt2 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.985 malloc3 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.985 [2024-11-05 15:56:08.300950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:35.985 [2024-11-05 15:56:08.300994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.985 [2024-11-05 15:56:08.301010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:35.985 [2024-11-05 15:56:08.301017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.985 [2024-11-05 15:56:08.302731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.985 [2024-11-05 15:56:08.302762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:35.985 pt3 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.985 [2024-11-05 15:56:08.312994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:35.985 [2024-11-05 15:56:08.314487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:35.985 [2024-11-05 
15:56:08.314538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:35.985 [2024-11-05 15:56:08.314661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:35.985 [2024-11-05 15:56:08.314671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:35.985 [2024-11-05 15:56:08.314888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:35.985 [2024-11-05 15:56:08.315001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:35.985 [2024-11-05 15:56:08.315008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:35.985 [2024-11-05 15:56:08.315118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.985 "name": "raid_bdev1", 00:26:35.985 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:35.985 "strip_size_kb": 64, 00:26:35.985 "state": "online", 00:26:35.985 "raid_level": "concat", 00:26:35.985 "superblock": true, 00:26:35.985 "num_base_bdevs": 3, 00:26:35.985 "num_base_bdevs_discovered": 3, 00:26:35.985 "num_base_bdevs_operational": 3, 00:26:35.985 "base_bdevs_list": [ 00:26:35.985 { 00:26:35.985 "name": "pt1", 00:26:35.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:35.985 "is_configured": true, 00:26:35.985 "data_offset": 2048, 00:26:35.985 "data_size": 63488 00:26:35.985 }, 00:26:35.985 { 00:26:35.985 "name": "pt2", 00:26:35.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.985 "is_configured": true, 00:26:35.985 "data_offset": 2048, 00:26:35.985 "data_size": 63488 00:26:35.985 }, 00:26:35.985 { 00:26:35.985 "name": "pt3", 00:26:35.985 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:35.985 "is_configured": true, 00:26:35.985 "data_offset": 2048, 00:26:35.985 "data_size": 63488 00:26:35.985 } 00:26:35.985 ] 00:26:35.985 }' 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.985 15:56:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.243 [2024-11-05 15:56:08.629279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.243 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:36.243 "name": "raid_bdev1", 00:26:36.243 "aliases": [ 00:26:36.243 "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7" 00:26:36.243 ], 00:26:36.243 "product_name": "Raid Volume", 00:26:36.243 "block_size": 512, 00:26:36.243 "num_blocks": 190464, 00:26:36.243 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:36.243 "assigned_rate_limits": { 00:26:36.243 "rw_ios_per_sec": 0, 00:26:36.243 "rw_mbytes_per_sec": 0, 00:26:36.243 "r_mbytes_per_sec": 0, 00:26:36.243 "w_mbytes_per_sec": 0 00:26:36.243 }, 00:26:36.243 "claimed": false, 00:26:36.243 "zoned": false, 
00:26:36.243 "supported_io_types": { 00:26:36.243 "read": true, 00:26:36.243 "write": true, 00:26:36.243 "unmap": true, 00:26:36.243 "flush": true, 00:26:36.243 "reset": true, 00:26:36.243 "nvme_admin": false, 00:26:36.243 "nvme_io": false, 00:26:36.243 "nvme_io_md": false, 00:26:36.243 "write_zeroes": true, 00:26:36.243 "zcopy": false, 00:26:36.243 "get_zone_info": false, 00:26:36.243 "zone_management": false, 00:26:36.243 "zone_append": false, 00:26:36.243 "compare": false, 00:26:36.243 "compare_and_write": false, 00:26:36.243 "abort": false, 00:26:36.243 "seek_hole": false, 00:26:36.243 "seek_data": false, 00:26:36.243 "copy": false, 00:26:36.243 "nvme_iov_md": false 00:26:36.243 }, 00:26:36.243 "memory_domains": [ 00:26:36.243 { 00:26:36.243 "dma_device_id": "system", 00:26:36.243 "dma_device_type": 1 00:26:36.243 }, 00:26:36.243 { 00:26:36.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.243 "dma_device_type": 2 00:26:36.243 }, 00:26:36.243 { 00:26:36.243 "dma_device_id": "system", 00:26:36.244 "dma_device_type": 1 00:26:36.244 }, 00:26:36.244 { 00:26:36.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.244 "dma_device_type": 2 00:26:36.244 }, 00:26:36.244 { 00:26:36.244 "dma_device_id": "system", 00:26:36.244 "dma_device_type": 1 00:26:36.244 }, 00:26:36.244 { 00:26:36.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.244 "dma_device_type": 2 00:26:36.244 } 00:26:36.244 ], 00:26:36.244 "driver_specific": { 00:26:36.244 "raid": { 00:26:36.244 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:36.244 "strip_size_kb": 64, 00:26:36.244 "state": "online", 00:26:36.244 "raid_level": "concat", 00:26:36.244 "superblock": true, 00:26:36.244 "num_base_bdevs": 3, 00:26:36.244 "num_base_bdevs_discovered": 3, 00:26:36.244 "num_base_bdevs_operational": 3, 00:26:36.244 "base_bdevs_list": [ 00:26:36.244 { 00:26:36.244 "name": "pt1", 00:26:36.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:36.244 "is_configured": true, 00:26:36.244 "data_offset": 
2048, 00:26:36.244 "data_size": 63488 00:26:36.244 }, 00:26:36.244 { 00:26:36.244 "name": "pt2", 00:26:36.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:36.244 "is_configured": true, 00:26:36.244 "data_offset": 2048, 00:26:36.244 "data_size": 63488 00:26:36.244 }, 00:26:36.244 { 00:26:36.244 "name": "pt3", 00:26:36.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:36.244 "is_configured": true, 00:26:36.244 "data_offset": 2048, 00:26:36.244 "data_size": 63488 00:26:36.244 } 00:26:36.244 ] 00:26:36.244 } 00:26:36.244 } 00:26:36.244 }' 00:26:36.244 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:36.502 pt2 00:26:36.502 pt3' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 
-- # [[ 512 == \5\1\2\ \ \ ]] 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.502 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.503 [2024-11-05 15:56:08.825287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7 ']' 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.503 [2024-11-05 15:56:08.849051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.503 [2024-11-05 15:56:08.849072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:36.503 [2024-11-05 15:56:08.849132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:36.503 [2024-11-05 15:56:08.849183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:36.503 [2024-11-05 15:56:08.849191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:36.503 15:56:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.503 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 
00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 [2024-11-05 15:56:08.953122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:36.761 [2024-11-05 15:56:08.954702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:36.761 [2024-11-05 15:56:08.954744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:36.761 [2024-11-05 15:56:08.954784] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:36.761 [2024-11-05 15:56:08.954828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:36.761 [2024-11-05 15:56:08.954859] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:36.761 [2024-11-05 15:56:08.954874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.761 [2024-11-05 15:56:08.954882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:36.761 request: 00:26:36.761 { 00:26:36.761 "name": "raid_bdev1", 00:26:36.761 "raid_level": "concat", 00:26:36.761 "base_bdevs": [ 00:26:36.761 "malloc1", 00:26:36.761 "malloc2", 00:26:36.761 "malloc3" 00:26:36.761 ], 00:26:36.761 "strip_size_kb": 64, 00:26:36.761 "superblock": false, 00:26:36.761 "method": "bdev_raid_create", 00:26:36.761 "req_id": 1 00:26:36.761 } 00:26:36.761 Got JSON-RPC error response 00:26:36.761 response: 00:26:36.761 { 00:26:36.761 "code": -17, 00:26:36.761 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:36.761 } 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.761 15:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 [2024-11-05 15:56:08.997084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:36.761 [2024-11-05 15:56:08.997132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.761 [2024-11-05 15:56:08.997148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:36.761 [2024-11-05 15:56:08.997155] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:26:36.761 [2024-11-05 15:56:08.998963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.761 [2024-11-05 15:56:08.999082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:36.761 [2024-11-05 15:56:08.999159] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:36.761 [2024-11-05 15:56:08.999205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:36.761 pt1 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.761 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.761 "name": "raid_bdev1", 00:26:36.761 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:36.761 "strip_size_kb": 64, 00:26:36.761 "state": "configuring", 00:26:36.761 "raid_level": "concat", 00:26:36.761 "superblock": true, 00:26:36.761 "num_base_bdevs": 3, 00:26:36.761 "num_base_bdevs_discovered": 1, 00:26:36.761 "num_base_bdevs_operational": 3, 00:26:36.761 "base_bdevs_list": [ 00:26:36.761 { 00:26:36.761 "name": "pt1", 00:26:36.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:36.761 "is_configured": true, 00:26:36.761 "data_offset": 2048, 00:26:36.761 "data_size": 63488 00:26:36.761 }, 00:26:36.761 { 00:26:36.761 "name": null, 00:26:36.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:36.761 "is_configured": false, 00:26:36.761 "data_offset": 2048, 00:26:36.761 "data_size": 63488 00:26:36.761 }, 00:26:36.761 { 00:26:36.761 "name": null, 00:26:36.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:36.761 "is_configured": false, 00:26:36.762 "data_offset": 2048, 00:26:36.762 "data_size": 63488 00:26:36.762 } 00:26:36.762 ] 00:26:36.762 }' 00:26:36.762 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:36.762 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.021 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:37.022 15:56:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.022 [2024-11-05 15:56:09.313149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:37.022 [2024-11-05 15:56:09.313199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.022 [2024-11-05 15:56:09.313215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:37.022 [2024-11-05 15:56:09.313222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.022 [2024-11-05 15:56:09.313559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.022 [2024-11-05 15:56:09.313570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:37.022 [2024-11-05 15:56:09.313629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:37.022 [2024-11-05 15:56:09.313644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:37.022 pt2 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.022 [2024-11-05 15:56:09.321143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.022 "name": "raid_bdev1", 00:26:37.022 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:37.022 "strip_size_kb": 64, 00:26:37.022 "state": "configuring", 00:26:37.022 "raid_level": "concat", 00:26:37.022 "superblock": true, 00:26:37.022 "num_base_bdevs": 3, 00:26:37.022 "num_base_bdevs_discovered": 1, 00:26:37.022 "num_base_bdevs_operational": 3, 00:26:37.022 "base_bdevs_list": [ 00:26:37.022 { 00:26:37.022 
"name": "pt1", 00:26:37.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:37.022 "is_configured": true, 00:26:37.022 "data_offset": 2048, 00:26:37.022 "data_size": 63488 00:26:37.022 }, 00:26:37.022 { 00:26:37.022 "name": null, 00:26:37.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.022 "is_configured": false, 00:26:37.022 "data_offset": 0, 00:26:37.022 "data_size": 63488 00:26:37.022 }, 00:26:37.022 { 00:26:37.022 "name": null, 00:26:37.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.022 "is_configured": false, 00:26:37.022 "data_offset": 2048, 00:26:37.022 "data_size": 63488 00:26:37.022 } 00:26:37.022 ] 00:26:37.022 }' 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.022 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.280 [2024-11-05 15:56:09.653207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:37.280 [2024-11-05 15:56:09.653365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.280 [2024-11-05 15:56:09.653383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:37.280 [2024-11-05 15:56:09.653392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.280 [2024-11-05 15:56:09.653740] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.280 [2024-11-05 15:56:09.653760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:37.280 [2024-11-05 15:56:09.653821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:37.280 [2024-11-05 15:56:09.653851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:37.280 pt2 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.280 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.280 [2024-11-05 15:56:09.661203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:37.280 [2024-11-05 15:56:09.661240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.280 [2024-11-05 15:56:09.661251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:37.280 [2024-11-05 15:56:09.661266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.280 [2024-11-05 15:56:09.661562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.280 [2024-11-05 15:56:09.661580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:37.280 [2024-11-05 15:56:09.661630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:37.280 [2024-11-05 
15:56:09.661645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:37.280 [2024-11-05 15:56:09.661736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:37.280 [2024-11-05 15:56:09.661744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:37.280 [2024-11-05 15:56:09.661942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:37.281 [2024-11-05 15:56:09.662054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:37.281 [2024-11-05 15:56:09.662060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:37.281 [2024-11-05 15:56:09.662157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.281 pt3 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.281 
15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.281 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.538 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.538 "name": "raid_bdev1", 00:26:37.538 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:37.538 "strip_size_kb": 64, 00:26:37.538 "state": "online", 00:26:37.538 "raid_level": "concat", 00:26:37.538 "superblock": true, 00:26:37.538 "num_base_bdevs": 3, 00:26:37.538 "num_base_bdevs_discovered": 3, 00:26:37.538 "num_base_bdevs_operational": 3, 00:26:37.538 "base_bdevs_list": [ 00:26:37.538 { 00:26:37.538 "name": "pt1", 00:26:37.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:37.538 "is_configured": true, 00:26:37.538 "data_offset": 2048, 00:26:37.538 "data_size": 63488 00:26:37.538 }, 00:26:37.538 { 00:26:37.538 "name": "pt2", 00:26:37.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.538 "is_configured": true, 00:26:37.538 "data_offset": 2048, 00:26:37.538 "data_size": 63488 00:26:37.538 }, 00:26:37.538 { 00:26:37.538 "name": "pt3", 00:26:37.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.538 "is_configured": true, 00:26:37.538 "data_offset": 2048, 00:26:37.538 "data_size": 63488 
00:26:37.538 } 00:26:37.538 ] 00:26:37.538 }' 00:26:37.538 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.538 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.796 15:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.796 [2024-11-05 15:56:09.993532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.796 "name": "raid_bdev1", 00:26:37.796 "aliases": [ 00:26:37.796 "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7" 00:26:37.796 ], 00:26:37.796 "product_name": "Raid Volume", 00:26:37.796 "block_size": 512, 00:26:37.796 "num_blocks": 190464, 00:26:37.796 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:37.796 "assigned_rate_limits": { 00:26:37.796 
"rw_ios_per_sec": 0, 00:26:37.796 "rw_mbytes_per_sec": 0, 00:26:37.796 "r_mbytes_per_sec": 0, 00:26:37.796 "w_mbytes_per_sec": 0 00:26:37.796 }, 00:26:37.796 "claimed": false, 00:26:37.796 "zoned": false, 00:26:37.796 "supported_io_types": { 00:26:37.796 "read": true, 00:26:37.796 "write": true, 00:26:37.796 "unmap": true, 00:26:37.796 "flush": true, 00:26:37.796 "reset": true, 00:26:37.796 "nvme_admin": false, 00:26:37.796 "nvme_io": false, 00:26:37.796 "nvme_io_md": false, 00:26:37.796 "write_zeroes": true, 00:26:37.796 "zcopy": false, 00:26:37.796 "get_zone_info": false, 00:26:37.796 "zone_management": false, 00:26:37.796 "zone_append": false, 00:26:37.796 "compare": false, 00:26:37.796 "compare_and_write": false, 00:26:37.796 "abort": false, 00:26:37.796 "seek_hole": false, 00:26:37.796 "seek_data": false, 00:26:37.796 "copy": false, 00:26:37.796 "nvme_iov_md": false 00:26:37.796 }, 00:26:37.796 "memory_domains": [ 00:26:37.796 { 00:26:37.796 "dma_device_id": "system", 00:26:37.796 "dma_device_type": 1 00:26:37.796 }, 00:26:37.796 { 00:26:37.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.796 "dma_device_type": 2 00:26:37.796 }, 00:26:37.796 { 00:26:37.796 "dma_device_id": "system", 00:26:37.796 "dma_device_type": 1 00:26:37.796 }, 00:26:37.796 { 00:26:37.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.796 "dma_device_type": 2 00:26:37.796 }, 00:26:37.796 { 00:26:37.796 "dma_device_id": "system", 00:26:37.796 "dma_device_type": 1 00:26:37.796 }, 00:26:37.796 { 00:26:37.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.796 "dma_device_type": 2 00:26:37.796 } 00:26:37.796 ], 00:26:37.796 "driver_specific": { 00:26:37.796 "raid": { 00:26:37.796 "uuid": "bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7", 00:26:37.796 "strip_size_kb": 64, 00:26:37.796 "state": "online", 00:26:37.796 "raid_level": "concat", 00:26:37.796 "superblock": true, 00:26:37.796 "num_base_bdevs": 3, 00:26:37.796 "num_base_bdevs_discovered": 3, 00:26:37.796 "num_base_bdevs_operational": 
3, 00:26:37.796 "base_bdevs_list": [ 00:26:37.796 { 00:26:37.796 "name": "pt1", 00:26:37.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:37.796 "is_configured": true, 00:26:37.796 "data_offset": 2048, 00:26:37.796 "data_size": 63488 00:26:37.796 }, 00:26:37.796 { 00:26:37.796 "name": "pt2", 00:26:37.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.796 "is_configured": true, 00:26:37.796 "data_offset": 2048, 00:26:37.796 "data_size": 63488 00:26:37.796 }, 00:26:37.796 { 00:26:37.796 "name": "pt3", 00:26:37.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.796 "is_configured": true, 00:26:37.796 "data_offset": 2048, 00:26:37.796 "data_size": 63488 00:26:37.796 } 00:26:37.796 ] 00:26:37.796 } 00:26:37.796 } 00:26:37.796 }' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:37.796 pt2 00:26:37.796 pt3' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:37.796 15:56:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.796 [2024-11-05 15:56:10.173565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7 '!=' bb4ca188-cb0f-4fa0-9451-7ed5ec7070a7 ']' 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:26:37.796 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:37.797 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:37.797 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65100 00:26:37.797 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65100 ']' 00:26:37.797 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65100 00:26:37.797 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:26:37.797 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:37.797 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65100 00:26:38.054 killing process with pid 65100 00:26:38.055 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:26:38.055 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:38.055 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65100' 00:26:38.055 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65100 00:26:38.055 [2024-11-05 15:56:10.215759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:38.055 [2024-11-05 15:56:10.215828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:38.055 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65100 00:26:38.055 [2024-11-05 15:56:10.215889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:38.055 [2024-11-05 15:56:10.215900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:38.055 [2024-11-05 15:56:10.363818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:38.620 15:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:38.621 00:26:38.621 real 0m3.643s 00:26:38.621 user 0m5.352s 00:26:38.621 sys 0m0.563s 00:26:38.621 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:38.621 ************************************ 00:26:38.621 END TEST raid_superblock_test 00:26:38.621 15:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.621 ************************************ 00:26:38.621 15:56:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:26:38.621 15:56:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:38.621 15:56:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:38.621 15:56:10 bdev_raid -- common/autotest_common.sh@10 -- 
# set +x 00:26:38.621 ************************************ 00:26:38.621 START TEST raid_read_error_test 00:26:38.621 ************************************ 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.84CB6fMF4h 00:26:38.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65336 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65336 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65336 ']' 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.621 15:56:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:38.621 [2024-11-05 15:56:11.035557] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:26:38.621 [2024-11-05 15:56:11.035676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65336 ] 00:26:38.878 [2024-11-05 15:56:11.189452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.878 [2024-11-05 15:56:11.272106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.135 [2024-11-05 15:56:11.380591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.135 [2024-11-05 15:56:11.380622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 BaseBdev1_malloc 00:26:39.701 15:56:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 true 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 [2024-11-05 15:56:11.924988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:39.701 [2024-11-05 15:56:11.925027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.701 [2024-11-05 15:56:11.925041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:39.701 [2024-11-05 15:56:11.925050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.701 [2024-11-05 15:56:11.926766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.701 [2024-11-05 15:56:11.926799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:39.701 BaseBdev1 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 BaseBdev2_malloc 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 true 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 [2024-11-05 15:56:11.964171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:39.701 [2024-11-05 15:56:11.964312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.701 [2024-11-05 15:56:11.964332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:39.701 [2024-11-05 15:56:11.964340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.701 [2024-11-05 15:56:11.966051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.701 [2024-11-05 15:56:11.966081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:39.701 BaseBdev2 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 BaseBdev3_malloc 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 true 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 [2024-11-05 15:56:12.015413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:39.701 [2024-11-05 15:56:12.015555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.701 [2024-11-05 15:56:12.015574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:39.701 [2024-11-05 15:56:12.015582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.701 [2024-11-05 15:56:12.017290] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.701 [2024-11-05 15:56:12.017316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:39.701 BaseBdev3 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.701 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.701 [2024-11-05 15:56:12.023471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:39.701 [2024-11-05 15:56:12.024957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:39.701 [2024-11-05 15:56:12.025015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:39.701 [2024-11-05 15:56:12.025169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:39.701 [2024-11-05 15:56:12.025177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:39.701 [2024-11-05 15:56:12.025375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:26:39.701 [2024-11-05 15:56:12.025491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:39.701 [2024-11-05 15:56:12.025501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:39.702 [2024-11-05 15:56:12.025607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.702 15:56:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:39.702 "name": "raid_bdev1", 00:26:39.702 "uuid": "ea6580bd-a9e5-48ad-8c6a-33184213baea", 00:26:39.702 "strip_size_kb": 64, 00:26:39.702 "state": "online", 00:26:39.702 "raid_level": "concat", 00:26:39.702 "superblock": true, 00:26:39.702 "num_base_bdevs": 3, 
00:26:39.702 "num_base_bdevs_discovered": 3, 00:26:39.702 "num_base_bdevs_operational": 3, 00:26:39.702 "base_bdevs_list": [ 00:26:39.702 { 00:26:39.702 "name": "BaseBdev1", 00:26:39.702 "uuid": "f2f1ac8f-241c-54d8-b556-eaea73100108", 00:26:39.702 "is_configured": true, 00:26:39.702 "data_offset": 2048, 00:26:39.702 "data_size": 63488 00:26:39.702 }, 00:26:39.702 { 00:26:39.702 "name": "BaseBdev2", 00:26:39.702 "uuid": "a85a4995-2bfb-56d0-8e4c-af1453b7b539", 00:26:39.702 "is_configured": true, 00:26:39.702 "data_offset": 2048, 00:26:39.702 "data_size": 63488 00:26:39.702 }, 00:26:39.702 { 00:26:39.702 "name": "BaseBdev3", 00:26:39.702 "uuid": "3d4f3ac1-9000-5dec-9cc3-358c5d5f9b1c", 00:26:39.702 "is_configured": true, 00:26:39.702 "data_offset": 2048, 00:26:39.702 "data_size": 63488 00:26:39.702 } 00:26:39.702 ] 00:26:39.702 }' 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:39.702 15:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.960 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:39.960 15:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:40.217 [2024-11-05 15:56:12.428306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:41.149 
15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:41.149 "name": "raid_bdev1", 
00:26:41.149 "uuid": "ea6580bd-a9e5-48ad-8c6a-33184213baea", 00:26:41.149 "strip_size_kb": 64, 00:26:41.149 "state": "online", 00:26:41.149 "raid_level": "concat", 00:26:41.149 "superblock": true, 00:26:41.149 "num_base_bdevs": 3, 00:26:41.149 "num_base_bdevs_discovered": 3, 00:26:41.149 "num_base_bdevs_operational": 3, 00:26:41.149 "base_bdevs_list": [ 00:26:41.149 { 00:26:41.149 "name": "BaseBdev1", 00:26:41.149 "uuid": "f2f1ac8f-241c-54d8-b556-eaea73100108", 00:26:41.149 "is_configured": true, 00:26:41.149 "data_offset": 2048, 00:26:41.149 "data_size": 63488 00:26:41.149 }, 00:26:41.149 { 00:26:41.149 "name": "BaseBdev2", 00:26:41.149 "uuid": "a85a4995-2bfb-56d0-8e4c-af1453b7b539", 00:26:41.149 "is_configured": true, 00:26:41.149 "data_offset": 2048, 00:26:41.149 "data_size": 63488 00:26:41.149 }, 00:26:41.149 { 00:26:41.149 "name": "BaseBdev3", 00:26:41.149 "uuid": "3d4f3ac1-9000-5dec-9cc3-358c5d5f9b1c", 00:26:41.149 "is_configured": true, 00:26:41.149 "data_offset": 2048, 00:26:41.149 "data_size": 63488 00:26:41.149 } 00:26:41.149 ] 00:26:41.149 }' 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:41.149 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.407 [2024-11-05 15:56:13.667558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:41.407 [2024-11-05 15:56:13.667584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:41.407 [2024-11-05 15:56:13.669937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:41.407 [2024-11-05 15:56:13.669973] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:41.407 [2024-11-05 15:56:13.670003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:41.407 [2024-11-05 15:56:13.670012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:41.407 { 00:26:41.407 "results": [ 00:26:41.407 { 00:26:41.407 "job": "raid_bdev1", 00:26:41.407 "core_mask": "0x1", 00:26:41.407 "workload": "randrw", 00:26:41.407 "percentage": 50, 00:26:41.407 "status": "finished", 00:26:41.407 "queue_depth": 1, 00:26:41.407 "io_size": 131072, 00:26:41.407 "runtime": 1.237701, 00:26:41.407 "iops": 18929.45065084378, 00:26:41.407 "mibps": 2366.1813313554726, 00:26:41.407 "io_failed": 1, 00:26:41.407 "io_timeout": 0, 00:26:41.407 "avg_latency_us": 72.3434534291999, 00:26:41.407 "min_latency_us": 25.403076923076924, 00:26:41.407 "max_latency_us": 1342.2276923076922 00:26:41.407 } 00:26:41.407 ], 00:26:41.407 "core_count": 1 00:26:41.407 } 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65336 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65336 ']' 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65336 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65336 00:26:41.407 killing process with pid 65336 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:41.407 15:56:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65336' 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65336 00:26:41.407 [2024-11-05 15:56:13.701734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:41.407 15:56:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65336 00:26:41.407 [2024-11-05 15:56:13.811030] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.84CB6fMF4h 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:26:42.340 00:26:42.340 real 0m3.442s 00:26:42.340 user 0m4.168s 00:26:42.340 sys 0m0.363s 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:42.340 15:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.340 ************************************ 00:26:42.340 END TEST raid_read_error_test 00:26:42.340 ************************************ 00:26:42.340 15:56:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 3 write 00:26:42.340 15:56:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:42.340 15:56:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.340 15:56:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:42.340 ************************************ 00:26:42.340 START TEST raid_write_error_test 00:26:42.340 ************************************ 00:26:42.340 15:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:26:42.340 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:26:42.340 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:42.340 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:42.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1dikGjtjRQ 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65471 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65471 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65471 ']' 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.341 15:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.341 [2024-11-05 15:56:14.524175] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:26:42.341 [2024-11-05 15:56:14.524391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65471 ] 00:26:42.341 [2024-11-05 15:56:14.677816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.599 [2024-11-05 15:56:14.776506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.599 [2024-11-05 15:56:14.911805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:42.599 [2024-11-05 15:56:14.911863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.165 BaseBdev1_malloc 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.165 true 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.165 [2024-11-05 15:56:15.415089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:43.165 [2024-11-05 15:56:15.415140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.165 [2024-11-05 15:56:15.415159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:43.165 [2024-11-05 15:56:15.415170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.165 [2024-11-05 15:56:15.417276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.165 [2024-11-05 15:56:15.417313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:43.165 BaseBdev1 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.165 BaseBdev2_malloc 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:43.165 15:56:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.165 true 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.165 [2024-11-05 15:56:15.458618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:43.165 [2024-11-05 15:56:15.458663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.165 [2024-11-05 15:56:15.458678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:43.165 [2024-11-05 15:56:15.458687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.165 [2024-11-05 15:56:15.460732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.165 [2024-11-05 15:56:15.460870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:43.165 BaseBdev2 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.165 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:26:43.166 BaseBdev3_malloc 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.166 true 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.166 [2024-11-05 15:56:15.516663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:43.166 [2024-11-05 15:56:15.516710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.166 [2024-11-05 15:56:15.516727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:43.166 [2024-11-05 15:56:15.516737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.166 [2024-11-05 15:56:15.518820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.166 [2024-11-05 15:56:15.518870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:43.166 BaseBdev3 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 
00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.166 [2024-11-05 15:56:15.524733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:43.166 [2024-11-05 15:56:15.526662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:43.166 [2024-11-05 15:56:15.526821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:43.166 [2024-11-05 15:56:15.527111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:43.166 [2024-11-05 15:56:15.527175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:43.166 [2024-11-05 15:56:15.527438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:26:43.166 [2024-11-05 15:56:15.527632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:43.166 [2024-11-05 15:56:15.527665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:43.166 [2024-11-05 15:56:15.527932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:43.166 "name": "raid_bdev1", 00:26:43.166 "uuid": "c049a859-49be-4952-b521-be0197b5e07f", 00:26:43.166 "strip_size_kb": 64, 00:26:43.166 "state": "online", 00:26:43.166 "raid_level": "concat", 00:26:43.166 "superblock": true, 00:26:43.166 "num_base_bdevs": 3, 00:26:43.166 "num_base_bdevs_discovered": 3, 00:26:43.166 "num_base_bdevs_operational": 3, 00:26:43.166 "base_bdevs_list": [ 00:26:43.166 { 00:26:43.166 "name": "BaseBdev1", 00:26:43.166 "uuid": "0f6dd41e-4ea1-576c-bd68-f874a482806c", 00:26:43.166 "is_configured": true, 00:26:43.166 "data_offset": 2048, 00:26:43.166 "data_size": 63488 00:26:43.166 }, 00:26:43.166 { 00:26:43.166 "name": "BaseBdev2", 00:26:43.166 "uuid": "09c0dcd3-8fa4-5204-95ed-c0e545c47fae", 00:26:43.166 
"is_configured": true, 00:26:43.166 "data_offset": 2048, 00:26:43.166 "data_size": 63488 00:26:43.166 }, 00:26:43.166 { 00:26:43.166 "name": "BaseBdev3", 00:26:43.166 "uuid": "7c1314bf-3b4b-5af3-90c5-12dcda59ca30", 00:26:43.166 "is_configured": true, 00:26:43.166 "data_offset": 2048, 00:26:43.166 "data_size": 63488 00:26:43.166 } 00:26:43.166 ] 00:26:43.166 }' 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:43.166 15:56:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.733 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:43.733 15:56:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:43.733 [2024-11-05 15:56:15.961778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:44.695 
15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:44.695 "name": "raid_bdev1", 00:26:44.695 "uuid": "c049a859-49be-4952-b521-be0197b5e07f", 00:26:44.695 "strip_size_kb": 64, 00:26:44.695 "state": "online", 00:26:44.695 "raid_level": "concat", 00:26:44.695 "superblock": true, 00:26:44.695 "num_base_bdevs": 3, 00:26:44.695 "num_base_bdevs_discovered": 3, 00:26:44.695 "num_base_bdevs_operational": 3, 00:26:44.695 "base_bdevs_list": [ 00:26:44.695 { 00:26:44.695 "name": "BaseBdev1", 00:26:44.695 "uuid": 
"0f6dd41e-4ea1-576c-bd68-f874a482806c", 00:26:44.695 "is_configured": true, 00:26:44.695 "data_offset": 2048, 00:26:44.695 "data_size": 63488 00:26:44.695 }, 00:26:44.695 { 00:26:44.695 "name": "BaseBdev2", 00:26:44.695 "uuid": "09c0dcd3-8fa4-5204-95ed-c0e545c47fae", 00:26:44.695 "is_configured": true, 00:26:44.695 "data_offset": 2048, 00:26:44.695 "data_size": 63488 00:26:44.695 }, 00:26:44.695 { 00:26:44.695 "name": "BaseBdev3", 00:26:44.695 "uuid": "7c1314bf-3b4b-5af3-90c5-12dcda59ca30", 00:26:44.695 "is_configured": true, 00:26:44.695 "data_offset": 2048, 00:26:44.695 "data_size": 63488 00:26:44.695 } 00:26:44.695 ] 00:26:44.695 }' 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:44.695 15:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.953 [2024-11-05 15:56:17.215705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:44.953 [2024-11-05 15:56:17.215853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:44.953 [2024-11-05 15:56:17.218929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:44.953 [2024-11-05 15:56:17.219056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.953 [2024-11-05 15:56:17.219116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:44.953 [2024-11-05 15:56:17.219506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:44.953 { 00:26:44.953 "results": [ 
00:26:44.953 { 00:26:44.953 "job": "raid_bdev1", 00:26:44.953 "core_mask": "0x1", 00:26:44.953 "workload": "randrw", 00:26:44.953 "percentage": 50, 00:26:44.953 "status": "finished", 00:26:44.953 "queue_depth": 1, 00:26:44.953 "io_size": 131072, 00:26:44.953 "runtime": 1.252139, 00:26:44.953 "iops": 15105.351722133086, 00:26:44.953 "mibps": 1888.1689652666357, 00:26:44.953 "io_failed": 1, 00:26:44.953 "io_timeout": 0, 00:26:44.953 "avg_latency_us": 90.47266125785396, 00:26:44.953 "min_latency_us": 33.28, 00:26:44.953 "max_latency_us": 1676.2092307692308 00:26:44.953 } 00:26:44.953 ], 00:26:44.953 "core_count": 1 00:26:44.953 } 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65471 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65471 ']' 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65471 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65471 00:26:44.953 killing process with pid 65471 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65471' 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65471 00:26:44.953 [2024-11-05 15:56:17.247255] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:26:44.953 15:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65471 00:26:45.209 [2024-11-05 15:56:17.388899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1dikGjtjRQ 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:26:45.773 00:26:45.773 real 0m3.658s 00:26:45.773 user 0m4.381s 00:26:45.773 sys 0m0.397s 00:26:45.773 ************************************ 00:26:45.773 END TEST raid_write_error_test 00:26:45.773 ************************************ 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:45.773 15:56:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.773 15:56:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:45.773 15:56:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:26:45.773 15:56:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:45.773 15:56:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:45.773 15:56:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:45.773 
************************************ 00:26:45.773 START TEST raid_state_function_test 00:26:45.773 ************************************ 00:26:45.773 15:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:26:45.773 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:26:45.773 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:45.773 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 
00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:45.774 Process raid pid: 65598 00:26:45.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65598 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65598' 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65598 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65598 ']' 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.774 15:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:46.031 [2024-11-05 15:56:18.207020] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:26:46.031 [2024-11-05 15:56:18.207102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.031 [2024-11-05 15:56:18.356937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.031 [2024-11-05 15:56:18.438746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.288 [2024-11-05 15:56:18.547667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:46.288 [2024-11-05 15:56:18.547810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.854 [2024-11-05 15:56:19.037167] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:46.854 [2024-11-05 
15:56:19.037296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:46.854 [2024-11-05 15:56:19.037351] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:46.854 [2024-11-05 15:56:19.037373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:46.854 [2024-11-05 15:56:19.037391] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:46.854 [2024-11-05 15:56:19.037407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.854 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:46.854 "name": "Existed_Raid", 00:26:46.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.854 "strip_size_kb": 0, 00:26:46.854 "state": "configuring", 00:26:46.854 "raid_level": "raid1", 00:26:46.854 "superblock": false, 00:26:46.854 "num_base_bdevs": 3, 00:26:46.854 "num_base_bdevs_discovered": 0, 00:26:46.854 "num_base_bdevs_operational": 3, 00:26:46.854 "base_bdevs_list": [ 00:26:46.854 { 00:26:46.855 "name": "BaseBdev1", 00:26:46.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.855 "is_configured": false, 00:26:46.855 "data_offset": 0, 00:26:46.855 "data_size": 0 00:26:46.855 }, 00:26:46.855 { 00:26:46.855 "name": "BaseBdev2", 00:26:46.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.855 "is_configured": false, 00:26:46.855 "data_offset": 0, 00:26:46.855 "data_size": 0 00:26:46.855 }, 00:26:46.855 { 00:26:46.855 "name": "BaseBdev3", 00:26:46.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.855 "is_configured": false, 00:26:46.855 "data_offset": 0, 00:26:46.855 "data_size": 0 00:26:46.855 } 00:26:46.855 ] 00:26:46.855 }' 00:26:46.855 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:46.855 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.113 [2024-11-05 15:56:19.369190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:47.113 [2024-11-05 15:56:19.369217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.113 [2024-11-05 15:56:19.377182] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:47.113 [2024-11-05 15:56:19.377214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:47.113 [2024-11-05 15:56:19.377221] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:47.113 [2024-11-05 15:56:19.377228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:47.113 [2024-11-05 15:56:19.377233] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:47.113 [2024-11-05 15:56:19.377240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.113 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.114 [2024-11-05 15:56:19.404839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:47.114 BaseBdev1 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.114 [ 00:26:47.114 { 
00:26:47.114 "name": "BaseBdev1", 00:26:47.114 "aliases": [ 00:26:47.114 "082c9a61-b886-4ca8-ab2d-eb8836ecc424" 00:26:47.114 ], 00:26:47.114 "product_name": "Malloc disk", 00:26:47.114 "block_size": 512, 00:26:47.114 "num_blocks": 65536, 00:26:47.114 "uuid": "082c9a61-b886-4ca8-ab2d-eb8836ecc424", 00:26:47.114 "assigned_rate_limits": { 00:26:47.114 "rw_ios_per_sec": 0, 00:26:47.114 "rw_mbytes_per_sec": 0, 00:26:47.114 "r_mbytes_per_sec": 0, 00:26:47.114 "w_mbytes_per_sec": 0 00:26:47.114 }, 00:26:47.114 "claimed": true, 00:26:47.114 "claim_type": "exclusive_write", 00:26:47.114 "zoned": false, 00:26:47.114 "supported_io_types": { 00:26:47.114 "read": true, 00:26:47.114 "write": true, 00:26:47.114 "unmap": true, 00:26:47.114 "flush": true, 00:26:47.114 "reset": true, 00:26:47.114 "nvme_admin": false, 00:26:47.114 "nvme_io": false, 00:26:47.114 "nvme_io_md": false, 00:26:47.114 "write_zeroes": true, 00:26:47.114 "zcopy": true, 00:26:47.114 "get_zone_info": false, 00:26:47.114 "zone_management": false, 00:26:47.114 "zone_append": false, 00:26:47.114 "compare": false, 00:26:47.114 "compare_and_write": false, 00:26:47.114 "abort": true, 00:26:47.114 "seek_hole": false, 00:26:47.114 "seek_data": false, 00:26:47.114 "copy": true, 00:26:47.114 "nvme_iov_md": false 00:26:47.114 }, 00:26:47.114 "memory_domains": [ 00:26:47.114 { 00:26:47.114 "dma_device_id": "system", 00:26:47.114 "dma_device_type": 1 00:26:47.114 }, 00:26:47.114 { 00:26:47.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.114 "dma_device_type": 2 00:26:47.114 } 00:26:47.114 ], 00:26:47.114 "driver_specific": {} 00:26:47.114 } 00:26:47.114 ] 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.114 "name": "Existed_Raid", 00:26:47.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.114 "strip_size_kb": 0, 00:26:47.114 "state": "configuring", 00:26:47.114 "raid_level": "raid1", 00:26:47.114 "superblock": false, 00:26:47.114 "num_base_bdevs": 3, 00:26:47.114 
"num_base_bdevs_discovered": 1, 00:26:47.114 "num_base_bdevs_operational": 3, 00:26:47.114 "base_bdevs_list": [ 00:26:47.114 { 00:26:47.114 "name": "BaseBdev1", 00:26:47.114 "uuid": "082c9a61-b886-4ca8-ab2d-eb8836ecc424", 00:26:47.114 "is_configured": true, 00:26:47.114 "data_offset": 0, 00:26:47.114 "data_size": 65536 00:26:47.114 }, 00:26:47.114 { 00:26:47.114 "name": "BaseBdev2", 00:26:47.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.114 "is_configured": false, 00:26:47.114 "data_offset": 0, 00:26:47.114 "data_size": 0 00:26:47.114 }, 00:26:47.114 { 00:26:47.114 "name": "BaseBdev3", 00:26:47.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.114 "is_configured": false, 00:26:47.114 "data_offset": 0, 00:26:47.114 "data_size": 0 00:26:47.114 } 00:26:47.114 ] 00:26:47.114 }' 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.114 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.372 [2024-11-05 15:56:19.744947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:47.372 [2024-11-05 15:56:19.745087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.372 [2024-11-05 15:56:19.752986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:47.372 [2024-11-05 15:56:19.754469] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:47.372 [2024-11-05 15:56:19.754502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:47.372 [2024-11-05 15:56:19.754509] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:47.372 [2024-11-05 15:56:19.754518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:47.372 15:56:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.372 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.629 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.629 "name": "Existed_Raid", 00:26:47.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.629 "strip_size_kb": 0, 00:26:47.630 "state": "configuring", 00:26:47.630 "raid_level": "raid1", 00:26:47.630 "superblock": false, 00:26:47.630 "num_base_bdevs": 3, 00:26:47.630 "num_base_bdevs_discovered": 1, 00:26:47.630 "num_base_bdevs_operational": 3, 00:26:47.630 "base_bdevs_list": [ 00:26:47.630 { 00:26:47.630 "name": "BaseBdev1", 00:26:47.630 "uuid": "082c9a61-b886-4ca8-ab2d-eb8836ecc424", 00:26:47.630 "is_configured": true, 00:26:47.630 "data_offset": 0, 00:26:47.630 "data_size": 65536 00:26:47.630 }, 00:26:47.630 { 00:26:47.630 "name": "BaseBdev2", 00:26:47.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.630 "is_configured": false, 00:26:47.630 "data_offset": 0, 00:26:47.630 "data_size": 0 00:26:47.630 }, 00:26:47.630 { 00:26:47.630 "name": "BaseBdev3", 00:26:47.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.630 "is_configured": false, 00:26:47.630 "data_offset": 0, 
00:26:47.630 "data_size": 0 00:26:47.630 } 00:26:47.630 ] 00:26:47.630 }' 00:26:47.630 15:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.630 15:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.887 [2024-11-05 15:56:20.071290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:47.887 BaseBdev2 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:47.887 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.888 [ 00:26:47.888 { 00:26:47.888 "name": "BaseBdev2", 00:26:47.888 "aliases": [ 00:26:47.888 "6edba73a-42dc-4ead-8aa4-06b58212c7c8" 00:26:47.888 ], 00:26:47.888 "product_name": "Malloc disk", 00:26:47.888 "block_size": 512, 00:26:47.888 "num_blocks": 65536, 00:26:47.888 "uuid": "6edba73a-42dc-4ead-8aa4-06b58212c7c8", 00:26:47.888 "assigned_rate_limits": { 00:26:47.888 "rw_ios_per_sec": 0, 00:26:47.888 "rw_mbytes_per_sec": 0, 00:26:47.888 "r_mbytes_per_sec": 0, 00:26:47.888 "w_mbytes_per_sec": 0 00:26:47.888 }, 00:26:47.888 "claimed": true, 00:26:47.888 "claim_type": "exclusive_write", 00:26:47.888 "zoned": false, 00:26:47.888 "supported_io_types": { 00:26:47.888 "read": true, 00:26:47.888 "write": true, 00:26:47.888 "unmap": true, 00:26:47.888 "flush": true, 00:26:47.888 "reset": true, 00:26:47.888 "nvme_admin": false, 00:26:47.888 "nvme_io": false, 00:26:47.888 "nvme_io_md": false, 00:26:47.888 "write_zeroes": true, 00:26:47.888 "zcopy": true, 00:26:47.888 "get_zone_info": false, 00:26:47.888 "zone_management": false, 00:26:47.888 "zone_append": false, 00:26:47.888 "compare": false, 00:26:47.888 "compare_and_write": false, 00:26:47.888 "abort": true, 00:26:47.888 "seek_hole": false, 00:26:47.888 "seek_data": false, 00:26:47.888 "copy": true, 00:26:47.888 "nvme_iov_md": false 00:26:47.888 }, 00:26:47.888 "memory_domains": [ 00:26:47.888 { 00:26:47.888 "dma_device_id": "system", 00:26:47.888 "dma_device_type": 1 00:26:47.888 }, 00:26:47.888 { 00:26:47.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.888 "dma_device_type": 2 00:26:47.888 } 00:26:47.888 ], 00:26:47.888 "driver_specific": {} 00:26:47.888 } 
00:26:47.888 ] 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.888 15:56:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.888 "name": "Existed_Raid", 00:26:47.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.888 "strip_size_kb": 0, 00:26:47.888 "state": "configuring", 00:26:47.888 "raid_level": "raid1", 00:26:47.888 "superblock": false, 00:26:47.888 "num_base_bdevs": 3, 00:26:47.888 "num_base_bdevs_discovered": 2, 00:26:47.888 "num_base_bdevs_operational": 3, 00:26:47.888 "base_bdevs_list": [ 00:26:47.888 { 00:26:47.888 "name": "BaseBdev1", 00:26:47.888 "uuid": "082c9a61-b886-4ca8-ab2d-eb8836ecc424", 00:26:47.888 "is_configured": true, 00:26:47.888 "data_offset": 0, 00:26:47.888 "data_size": 65536 00:26:47.888 }, 00:26:47.888 { 00:26:47.888 "name": "BaseBdev2", 00:26:47.888 "uuid": "6edba73a-42dc-4ead-8aa4-06b58212c7c8", 00:26:47.888 "is_configured": true, 00:26:47.888 "data_offset": 0, 00:26:47.888 "data_size": 65536 00:26:47.888 }, 00:26:47.888 { 00:26:47.888 "name": "BaseBdev3", 00:26:47.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.888 "is_configured": false, 00:26:47.888 "data_offset": 0, 00:26:47.888 "data_size": 0 00:26:47.888 } 00:26:47.888 ] 00:26:47.888 }' 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.888 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.147 [2024-11-05 15:56:20.432329] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:48.147 [2024-11-05 15:56:20.432364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:48.147 [2024-11-05 15:56:20.432374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:48.147 [2024-11-05 15:56:20.432593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:48.147 [2024-11-05 15:56:20.432715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:48.147 [2024-11-05 15:56:20.432722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:48.147 [2024-11-05 15:56:20.432945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.147 BaseBdev3 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.147 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.147 [ 00:26:48.147 { 00:26:48.147 "name": "BaseBdev3", 00:26:48.147 "aliases": [ 00:26:48.147 "bcfc1ce5-df85-47d1-94b7-b226cf2f3ed3" 00:26:48.147 ], 00:26:48.147 "product_name": "Malloc disk", 00:26:48.147 "block_size": 512, 00:26:48.147 "num_blocks": 65536, 00:26:48.147 "uuid": "bcfc1ce5-df85-47d1-94b7-b226cf2f3ed3", 00:26:48.147 "assigned_rate_limits": { 00:26:48.147 "rw_ios_per_sec": 0, 00:26:48.147 "rw_mbytes_per_sec": 0, 00:26:48.147 "r_mbytes_per_sec": 0, 00:26:48.147 "w_mbytes_per_sec": 0 00:26:48.147 }, 00:26:48.147 "claimed": true, 00:26:48.147 "claim_type": "exclusive_write", 00:26:48.147 "zoned": false, 00:26:48.147 "supported_io_types": { 00:26:48.147 "read": true, 00:26:48.147 "write": true, 00:26:48.147 "unmap": true, 00:26:48.147 "flush": true, 00:26:48.147 "reset": true, 00:26:48.147 "nvme_admin": false, 00:26:48.147 "nvme_io": false, 00:26:48.147 "nvme_io_md": false, 00:26:48.147 "write_zeroes": true, 00:26:48.147 "zcopy": true, 00:26:48.147 "get_zone_info": false, 00:26:48.147 "zone_management": false, 00:26:48.147 "zone_append": false, 00:26:48.147 "compare": false, 00:26:48.147 "compare_and_write": false, 00:26:48.147 "abort": true, 00:26:48.147 "seek_hole": false, 00:26:48.147 "seek_data": false, 00:26:48.147 "copy": true, 00:26:48.147 "nvme_iov_md": false 00:26:48.148 }, 00:26:48.148 "memory_domains": [ 00:26:48.148 { 00:26:48.148 "dma_device_id": "system", 00:26:48.148 "dma_device_type": 1 00:26:48.148 }, 00:26:48.148 { 00:26:48.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:26:48.148 "dma_device_type": 2 00:26:48.148 } 00:26:48.148 ], 00:26:48.148 "driver_specific": {} 00:26:48.148 } 00:26:48.148 ] 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.148 15:56:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:48.148 "name": "Existed_Raid", 00:26:48.148 "uuid": "dc0e3af7-9afa-4be0-ae33-e49781596a6d", 00:26:48.148 "strip_size_kb": 0, 00:26:48.148 "state": "online", 00:26:48.148 "raid_level": "raid1", 00:26:48.148 "superblock": false, 00:26:48.148 "num_base_bdevs": 3, 00:26:48.148 "num_base_bdevs_discovered": 3, 00:26:48.148 "num_base_bdevs_operational": 3, 00:26:48.148 "base_bdevs_list": [ 00:26:48.148 { 00:26:48.148 "name": "BaseBdev1", 00:26:48.148 "uuid": "082c9a61-b886-4ca8-ab2d-eb8836ecc424", 00:26:48.148 "is_configured": true, 00:26:48.148 "data_offset": 0, 00:26:48.148 "data_size": 65536 00:26:48.148 }, 00:26:48.148 { 00:26:48.148 "name": "BaseBdev2", 00:26:48.148 "uuid": "6edba73a-42dc-4ead-8aa4-06b58212c7c8", 00:26:48.148 "is_configured": true, 00:26:48.148 "data_offset": 0, 00:26:48.148 "data_size": 65536 00:26:48.148 }, 00:26:48.148 { 00:26:48.148 "name": "BaseBdev3", 00:26:48.148 "uuid": "bcfc1ce5-df85-47d1-94b7-b226cf2f3ed3", 00:26:48.148 "is_configured": true, 00:26:48.148 "data_offset": 0, 00:26:48.148 "data_size": 65536 00:26:48.148 } 00:26:48.148 ] 00:26:48.148 }' 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:48.148 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:48.406 15:56:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.406 [2024-11-05 15:56:20.756697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.406 "name": "Existed_Raid", 00:26:48.406 "aliases": [ 00:26:48.406 "dc0e3af7-9afa-4be0-ae33-e49781596a6d" 00:26:48.406 ], 00:26:48.406 "product_name": "Raid Volume", 00:26:48.406 "block_size": 512, 00:26:48.406 "num_blocks": 65536, 00:26:48.406 "uuid": "dc0e3af7-9afa-4be0-ae33-e49781596a6d", 00:26:48.406 "assigned_rate_limits": { 00:26:48.406 "rw_ios_per_sec": 0, 00:26:48.406 "rw_mbytes_per_sec": 0, 00:26:48.406 "r_mbytes_per_sec": 0, 00:26:48.406 "w_mbytes_per_sec": 0 00:26:48.406 }, 00:26:48.406 "claimed": false, 00:26:48.406 "zoned": false, 00:26:48.406 "supported_io_types": { 00:26:48.406 "read": true, 00:26:48.406 "write": true, 00:26:48.406 "unmap": false, 00:26:48.406 "flush": false, 00:26:48.406 "reset": true, 00:26:48.406 "nvme_admin": false, 00:26:48.406 "nvme_io": false, 00:26:48.406 
"nvme_io_md": false, 00:26:48.406 "write_zeroes": true, 00:26:48.406 "zcopy": false, 00:26:48.406 "get_zone_info": false, 00:26:48.406 "zone_management": false, 00:26:48.406 "zone_append": false, 00:26:48.406 "compare": false, 00:26:48.406 "compare_and_write": false, 00:26:48.406 "abort": false, 00:26:48.406 "seek_hole": false, 00:26:48.406 "seek_data": false, 00:26:48.406 "copy": false, 00:26:48.406 "nvme_iov_md": false 00:26:48.406 }, 00:26:48.406 "memory_domains": [ 00:26:48.406 { 00:26:48.406 "dma_device_id": "system", 00:26:48.406 "dma_device_type": 1 00:26:48.406 }, 00:26:48.406 { 00:26:48.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.406 "dma_device_type": 2 00:26:48.406 }, 00:26:48.406 { 00:26:48.406 "dma_device_id": "system", 00:26:48.406 "dma_device_type": 1 00:26:48.406 }, 00:26:48.406 { 00:26:48.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.406 "dma_device_type": 2 00:26:48.406 }, 00:26:48.406 { 00:26:48.406 "dma_device_id": "system", 00:26:48.406 "dma_device_type": 1 00:26:48.406 }, 00:26:48.406 { 00:26:48.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.406 "dma_device_type": 2 00:26:48.406 } 00:26:48.406 ], 00:26:48.406 "driver_specific": { 00:26:48.406 "raid": { 00:26:48.406 "uuid": "dc0e3af7-9afa-4be0-ae33-e49781596a6d", 00:26:48.406 "strip_size_kb": 0, 00:26:48.406 "state": "online", 00:26:48.406 "raid_level": "raid1", 00:26:48.406 "superblock": false, 00:26:48.406 "num_base_bdevs": 3, 00:26:48.406 "num_base_bdevs_discovered": 3, 00:26:48.406 "num_base_bdevs_operational": 3, 00:26:48.406 "base_bdevs_list": [ 00:26:48.406 { 00:26:48.406 "name": "BaseBdev1", 00:26:48.406 "uuid": "082c9a61-b886-4ca8-ab2d-eb8836ecc424", 00:26:48.406 "is_configured": true, 00:26:48.406 "data_offset": 0, 00:26:48.406 "data_size": 65536 00:26:48.406 }, 00:26:48.406 { 00:26:48.406 "name": "BaseBdev2", 00:26:48.406 "uuid": "6edba73a-42dc-4ead-8aa4-06b58212c7c8", 00:26:48.406 "is_configured": true, 00:26:48.406 "data_offset": 0, 00:26:48.406 
"data_size": 65536 00:26:48.406 }, 00:26:48.406 { 00:26:48.406 "name": "BaseBdev3", 00:26:48.406 "uuid": "bcfc1ce5-df85-47d1-94b7-b226cf2f3ed3", 00:26:48.406 "is_configured": true, 00:26:48.406 "data_offset": 0, 00:26:48.406 "data_size": 65536 00:26:48.406 } 00:26:48.406 ] 00:26:48.406 } 00:26:48.406 } 00:26:48.406 }' 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:48.406 BaseBdev2 00:26:48.406 BaseBdev3' 00:26:48.406 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:48.664 15:56:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.664 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.665 
15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.665 [2024-11-05 15:56:20.948511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:48.665 15:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:48.665 15:56:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.665 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.665 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.665 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.665 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.665 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:48.665 "name": "Existed_Raid", 00:26:48.665 "uuid": "dc0e3af7-9afa-4be0-ae33-e49781596a6d", 00:26:48.665 "strip_size_kb": 0, 00:26:48.665 "state": "online", 00:26:48.665 "raid_level": "raid1", 00:26:48.665 "superblock": false, 00:26:48.665 "num_base_bdevs": 3, 00:26:48.665 "num_base_bdevs_discovered": 2, 00:26:48.665 "num_base_bdevs_operational": 2, 00:26:48.665 "base_bdevs_list": [ 00:26:48.665 { 00:26:48.665 "name": null, 00:26:48.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.665 "is_configured": false, 00:26:48.665 "data_offset": 0, 00:26:48.665 "data_size": 65536 00:26:48.665 }, 00:26:48.665 { 00:26:48.665 "name": "BaseBdev2", 00:26:48.665 "uuid": "6edba73a-42dc-4ead-8aa4-06b58212c7c8", 00:26:48.665 "is_configured": true, 00:26:48.665 "data_offset": 0, 00:26:48.665 "data_size": 65536 00:26:48.665 }, 00:26:48.665 { 00:26:48.665 "name": "BaseBdev3", 00:26:48.665 "uuid": "bcfc1ce5-df85-47d1-94b7-b226cf2f3ed3", 00:26:48.665 "is_configured": true, 00:26:48.665 "data_offset": 0, 00:26:48.665 "data_size": 65536 00:26:48.665 } 00:26:48.665 ] 00:26:48.665 }' 00:26:48.665 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:48.665 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.922 15:56:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:48.922 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:48.922 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.922 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.922 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:48.922 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.922 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.181 [2024-11-05 15:56:21.350828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:49.181 15:56:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.181 [2024-11-05 15:56:21.428410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:49.181 [2024-11-05 15:56:21.428482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:49.181 [2024-11-05 15:56:21.473391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:49.181 [2024-11-05 15:56:21.473493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:49.181 [2024-11-05 15:56:21.473547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.181 15:56:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.181 BaseBdev2 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:49.181 15:56:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.181 [ 00:26:49.181 { 00:26:49.181 "name": "BaseBdev2", 00:26:49.181 "aliases": [ 00:26:49.181 "fad0ab4d-4487-4080-8341-4d17f9bc0fae" 00:26:49.181 ], 00:26:49.181 "product_name": "Malloc disk", 00:26:49.181 "block_size": 512, 00:26:49.181 "num_blocks": 65536, 00:26:49.181 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:49.181 "assigned_rate_limits": { 00:26:49.181 "rw_ios_per_sec": 0, 00:26:49.181 "rw_mbytes_per_sec": 0, 00:26:49.181 "r_mbytes_per_sec": 0, 00:26:49.181 "w_mbytes_per_sec": 0 00:26:49.181 }, 00:26:49.181 "claimed": false, 00:26:49.181 "zoned": false, 00:26:49.181 "supported_io_types": { 00:26:49.181 "read": true, 00:26:49.181 "write": true, 00:26:49.181 "unmap": true, 00:26:49.181 "flush": true, 00:26:49.181 "reset": true, 00:26:49.181 "nvme_admin": false, 00:26:49.181 "nvme_io": false, 00:26:49.181 "nvme_io_md": false, 00:26:49.181 "write_zeroes": true, 00:26:49.181 "zcopy": true, 00:26:49.181 "get_zone_info": false, 00:26:49.181 "zone_management": false, 00:26:49.181 "zone_append": false, 00:26:49.181 "compare": false, 00:26:49.181 "compare_and_write": false, 
00:26:49.181 "abort": true, 00:26:49.181 "seek_hole": false, 00:26:49.181 "seek_data": false, 00:26:49.181 "copy": true, 00:26:49.181 "nvme_iov_md": false 00:26:49.181 }, 00:26:49.181 "memory_domains": [ 00:26:49.181 { 00:26:49.181 "dma_device_id": "system", 00:26:49.181 "dma_device_type": 1 00:26:49.181 }, 00:26:49.181 { 00:26:49.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.181 "dma_device_type": 2 00:26:49.181 } 00:26:49.181 ], 00:26:49.181 "driver_specific": {} 00:26:49.181 } 00:26:49.181 ] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:49.181 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.182 BaseBdev3 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:49.182 15:56:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.182 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.440 [ 00:26:49.440 { 00:26:49.440 "name": "BaseBdev3", 00:26:49.440 "aliases": [ 00:26:49.440 "92a5097f-0ac6-4dce-af07-e2fe91b2b309" 00:26:49.440 ], 00:26:49.440 "product_name": "Malloc disk", 00:26:49.440 "block_size": 512, 00:26:49.440 "num_blocks": 65536, 00:26:49.440 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:49.440 "assigned_rate_limits": { 00:26:49.440 "rw_ios_per_sec": 0, 00:26:49.440 "rw_mbytes_per_sec": 0, 00:26:49.440 "r_mbytes_per_sec": 0, 00:26:49.440 "w_mbytes_per_sec": 0 00:26:49.440 }, 00:26:49.440 "claimed": false, 00:26:49.440 "zoned": false, 00:26:49.440 "supported_io_types": { 00:26:49.440 "read": true, 00:26:49.440 "write": true, 00:26:49.440 "unmap": true, 00:26:49.440 "flush": true, 00:26:49.440 "reset": true, 00:26:49.440 "nvme_admin": false, 00:26:49.440 "nvme_io": false, 00:26:49.440 "nvme_io_md": false, 00:26:49.440 "write_zeroes": true, 00:26:49.440 "zcopy": true, 00:26:49.440 "get_zone_info": false, 00:26:49.440 "zone_management": false, 00:26:49.440 "zone_append": false, 00:26:49.440 "compare": false, 00:26:49.440 "compare_and_write": false, 
00:26:49.440 "abort": true, 00:26:49.440 "seek_hole": false, 00:26:49.440 "seek_data": false, 00:26:49.440 "copy": true, 00:26:49.440 "nvme_iov_md": false 00:26:49.440 }, 00:26:49.440 "memory_domains": [ 00:26:49.440 { 00:26:49.440 "dma_device_id": "system", 00:26:49.440 "dma_device_type": 1 00:26:49.440 }, 00:26:49.440 { 00:26:49.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.440 "dma_device_type": 2 00:26:49.440 } 00:26:49.440 ], 00:26:49.440 "driver_specific": {} 00:26:49.440 } 00:26:49.440 ] 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.440 [2024-11-05 15:56:21.611387] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:49.440 [2024-11-05 15:56:21.611503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:49.440 [2024-11-05 15:56:21.611524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:49.440 [2024-11-05 15:56:21.613036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:49.440 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.441 "name": "Existed_Raid", 00:26:49.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.441 "strip_size_kb": 0, 00:26:49.441 "state": "configuring", 00:26:49.441 "raid_level": "raid1", 00:26:49.441 
"superblock": false, 00:26:49.441 "num_base_bdevs": 3, 00:26:49.441 "num_base_bdevs_discovered": 2, 00:26:49.441 "num_base_bdevs_operational": 3, 00:26:49.441 "base_bdevs_list": [ 00:26:49.441 { 00:26:49.441 "name": "BaseBdev1", 00:26:49.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.441 "is_configured": false, 00:26:49.441 "data_offset": 0, 00:26:49.441 "data_size": 0 00:26:49.441 }, 00:26:49.441 { 00:26:49.441 "name": "BaseBdev2", 00:26:49.441 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:49.441 "is_configured": true, 00:26:49.441 "data_offset": 0, 00:26:49.441 "data_size": 65536 00:26:49.441 }, 00:26:49.441 { 00:26:49.441 "name": "BaseBdev3", 00:26:49.441 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:49.441 "is_configured": true, 00:26:49.441 "data_offset": 0, 00:26:49.441 "data_size": 65536 00:26:49.441 } 00:26:49.441 ] 00:26:49.441 }' 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.441 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.699 [2024-11-05 15:56:21.911454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.699 "name": "Existed_Raid", 00:26:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.699 "strip_size_kb": 0, 00:26:49.699 "state": "configuring", 00:26:49.699 "raid_level": "raid1", 00:26:49.699 "superblock": false, 00:26:49.699 "num_base_bdevs": 3, 00:26:49.699 "num_base_bdevs_discovered": 1, 00:26:49.699 "num_base_bdevs_operational": 3, 00:26:49.699 "base_bdevs_list": [ 00:26:49.699 { 00:26:49.699 "name": "BaseBdev1", 00:26:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.699 
"is_configured": false, 00:26:49.699 "data_offset": 0, 00:26:49.699 "data_size": 0 00:26:49.699 }, 00:26:49.699 { 00:26:49.699 "name": null, 00:26:49.699 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:49.699 "is_configured": false, 00:26:49.699 "data_offset": 0, 00:26:49.699 "data_size": 65536 00:26:49.699 }, 00:26:49.699 { 00:26:49.699 "name": "BaseBdev3", 00:26:49.699 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:49.699 "is_configured": true, 00:26:49.699 "data_offset": 0, 00:26:49.699 "data_size": 65536 00:26:49.699 } 00:26:49.699 ] 00:26:49.699 }' 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.699 15:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.957 [2024-11-05 15:56:22.253128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:49.957 BaseBdev1 00:26:49.957 15:56:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:49.957 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.958 [ 00:26:49.958 { 00:26:49.958 "name": "BaseBdev1", 00:26:49.958 "aliases": [ 00:26:49.958 "8d14f099-722d-4b3c-abce-eab2002f8adc" 00:26:49.958 ], 00:26:49.958 "product_name": "Malloc disk", 00:26:49.958 "block_size": 512, 00:26:49.958 "num_blocks": 65536, 00:26:49.958 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:49.958 "assigned_rate_limits": { 00:26:49.958 "rw_ios_per_sec": 0, 00:26:49.958 "rw_mbytes_per_sec": 0, 00:26:49.958 
"r_mbytes_per_sec": 0, 00:26:49.958 "w_mbytes_per_sec": 0 00:26:49.958 }, 00:26:49.958 "claimed": true, 00:26:49.958 "claim_type": "exclusive_write", 00:26:49.958 "zoned": false, 00:26:49.958 "supported_io_types": { 00:26:49.958 "read": true, 00:26:49.958 "write": true, 00:26:49.958 "unmap": true, 00:26:49.958 "flush": true, 00:26:49.958 "reset": true, 00:26:49.958 "nvme_admin": false, 00:26:49.958 "nvme_io": false, 00:26:49.958 "nvme_io_md": false, 00:26:49.958 "write_zeroes": true, 00:26:49.958 "zcopy": true, 00:26:49.958 "get_zone_info": false, 00:26:49.958 "zone_management": false, 00:26:49.958 "zone_append": false, 00:26:49.958 "compare": false, 00:26:49.958 "compare_and_write": false, 00:26:49.958 "abort": true, 00:26:49.958 "seek_hole": false, 00:26:49.958 "seek_data": false, 00:26:49.958 "copy": true, 00:26:49.958 "nvme_iov_md": false 00:26:49.958 }, 00:26:49.958 "memory_domains": [ 00:26:49.958 { 00:26:49.958 "dma_device_id": "system", 00:26:49.958 "dma_device_type": 1 00:26:49.958 }, 00:26:49.958 { 00:26:49.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.958 "dma_device_type": 2 00:26:49.958 } 00:26:49.958 ], 00:26:49.958 "driver_specific": {} 00:26:49.958 } 00:26:49.958 ] 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.958 "name": "Existed_Raid", 00:26:49.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.958 "strip_size_kb": 0, 00:26:49.958 "state": "configuring", 00:26:49.958 "raid_level": "raid1", 00:26:49.958 "superblock": false, 00:26:49.958 "num_base_bdevs": 3, 00:26:49.958 "num_base_bdevs_discovered": 2, 00:26:49.958 "num_base_bdevs_operational": 3, 00:26:49.958 "base_bdevs_list": [ 00:26:49.958 { 00:26:49.958 "name": "BaseBdev1", 00:26:49.958 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:49.958 "is_configured": true, 00:26:49.958 "data_offset": 0, 00:26:49.958 "data_size": 65536 00:26:49.958 }, 00:26:49.958 { 00:26:49.958 "name": null, 00:26:49.958 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:49.958 
"is_configured": false, 00:26:49.958 "data_offset": 0, 00:26:49.958 "data_size": 65536 00:26:49.958 }, 00:26:49.958 { 00:26:49.958 "name": "BaseBdev3", 00:26:49.958 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:49.958 "is_configured": true, 00:26:49.958 "data_offset": 0, 00:26:49.958 "data_size": 65536 00:26:49.958 } 00:26:49.958 ] 00:26:49.958 }' 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.958 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.216 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.216 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.216 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.217 [2024-11-05 15:56:22.601218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.217 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.475 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.475 "name": "Existed_Raid", 00:26:50.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.475 "strip_size_kb": 0, 00:26:50.475 "state": "configuring", 00:26:50.475 "raid_level": "raid1", 00:26:50.475 "superblock": false, 00:26:50.475 "num_base_bdevs": 3, 00:26:50.475 "num_base_bdevs_discovered": 1, 00:26:50.475 "num_base_bdevs_operational": 3, 
00:26:50.475 "base_bdevs_list": [ 00:26:50.475 { 00:26:50.475 "name": "BaseBdev1", 00:26:50.475 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:50.475 "is_configured": true, 00:26:50.475 "data_offset": 0, 00:26:50.475 "data_size": 65536 00:26:50.475 }, 00:26:50.475 { 00:26:50.475 "name": null, 00:26:50.475 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:50.475 "is_configured": false, 00:26:50.475 "data_offset": 0, 00:26:50.475 "data_size": 65536 00:26:50.475 }, 00:26:50.475 { 00:26:50.475 "name": null, 00:26:50.475 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:50.475 "is_configured": false, 00:26:50.475 "data_offset": 0, 00:26:50.475 "data_size": 65536 00:26:50.475 } 00:26:50.475 ] 00:26:50.475 }' 00:26:50.475 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.475 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.731 [2024-11-05 
15:56:22.957309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:50.731 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.732 "name": "Existed_Raid", 00:26:50.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.732 "strip_size_kb": 0, 00:26:50.732 "state": "configuring", 00:26:50.732 "raid_level": "raid1", 00:26:50.732 "superblock": false, 00:26:50.732 "num_base_bdevs": 3, 00:26:50.732 "num_base_bdevs_discovered": 2, 00:26:50.732 "num_base_bdevs_operational": 3, 00:26:50.732 "base_bdevs_list": [ 00:26:50.732 { 00:26:50.732 "name": "BaseBdev1", 00:26:50.732 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:50.732 "is_configured": true, 00:26:50.732 "data_offset": 0, 00:26:50.732 "data_size": 65536 00:26:50.732 }, 00:26:50.732 { 00:26:50.732 "name": null, 00:26:50.732 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:50.732 "is_configured": false, 00:26:50.732 "data_offset": 0, 00:26:50.732 "data_size": 65536 00:26:50.732 }, 00:26:50.732 { 00:26:50.732 "name": "BaseBdev3", 00:26:50.732 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:50.732 "is_configured": true, 00:26:50.732 "data_offset": 0, 00:26:50.732 "data_size": 65536 00:26:50.732 } 00:26:50.732 ] 00:26:50.732 }' 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.732 15:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
[[ true == \t\r\u\e ]] 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.989 [2024-11-05 15:56:23.325402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:50.989 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.990 15:56:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.990 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.247 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:51.247 "name": "Existed_Raid", 00:26:51.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.247 "strip_size_kb": 0, 00:26:51.247 "state": "configuring", 00:26:51.247 "raid_level": "raid1", 00:26:51.247 "superblock": false, 00:26:51.247 "num_base_bdevs": 3, 00:26:51.247 "num_base_bdevs_discovered": 1, 00:26:51.247 "num_base_bdevs_operational": 3, 00:26:51.247 "base_bdevs_list": [ 00:26:51.247 { 00:26:51.247 "name": null, 00:26:51.247 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:51.247 "is_configured": false, 00:26:51.247 "data_offset": 0, 00:26:51.247 "data_size": 65536 00:26:51.247 }, 00:26:51.247 { 00:26:51.247 "name": null, 00:26:51.247 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:51.247 "is_configured": false, 00:26:51.247 "data_offset": 0, 00:26:51.247 "data_size": 65536 00:26:51.247 }, 00:26:51.247 { 00:26:51.247 "name": "BaseBdev3", 00:26:51.247 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:51.247 "is_configured": true, 00:26:51.247 "data_offset": 0, 00:26:51.247 "data_size": 65536 00:26:51.247 } 00:26:51.247 ] 00:26:51.247 }' 00:26:51.247 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.247 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.506 15:56:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.506 [2024-11-05 15:56:23.743860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:51.506 15:56:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:51.506 "name": "Existed_Raid", 00:26:51.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.506 "strip_size_kb": 0, 00:26:51.506 "state": "configuring", 00:26:51.506 "raid_level": "raid1", 00:26:51.506 "superblock": false, 00:26:51.506 "num_base_bdevs": 3, 00:26:51.506 "num_base_bdevs_discovered": 2, 00:26:51.506 "num_base_bdevs_operational": 3, 00:26:51.506 "base_bdevs_list": [ 00:26:51.506 { 00:26:51.506 "name": null, 00:26:51.506 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:51.506 "is_configured": false, 00:26:51.506 "data_offset": 0, 00:26:51.506 "data_size": 65536 00:26:51.506 }, 00:26:51.506 { 00:26:51.506 "name": "BaseBdev2", 00:26:51.506 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:51.506 "is_configured": true, 00:26:51.506 "data_offset": 0, 00:26:51.506 "data_size": 65536 00:26:51.506 }, 00:26:51.506 { 00:26:51.506 "name": "BaseBdev3", 00:26:51.506 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:51.506 "is_configured": true, 00:26:51.506 "data_offset": 0, 00:26:51.506 "data_size": 65536 00:26:51.506 } 00:26:51.506 ] 00:26:51.506 }' 00:26:51.506 15:56:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.506 15:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.804 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:51.804 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d14f099-722d-4b3c-abce-eab2002f8adc 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 [2024-11-05 15:56:24.169630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:51.805 [2024-11-05 15:56:24.169661] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:51.805 [2024-11-05 15:56:24.169667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:51.805 [2024-11-05 15:56:24.169886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:51.805 [2024-11-05 15:56:24.170000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:51.805 [2024-11-05 15:56:24.170008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:51.805 [2024-11-05 15:56:24.170164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.805 NewBaseBdev 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.805 
15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 [ 00:26:51.805 { 00:26:51.805 "name": "NewBaseBdev", 00:26:51.805 "aliases": [ 00:26:51.805 "8d14f099-722d-4b3c-abce-eab2002f8adc" 00:26:51.805 ], 00:26:51.805 "product_name": "Malloc disk", 00:26:51.805 "block_size": 512, 00:26:51.805 "num_blocks": 65536, 00:26:51.805 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:51.805 "assigned_rate_limits": { 00:26:51.805 "rw_ios_per_sec": 0, 00:26:51.805 "rw_mbytes_per_sec": 0, 00:26:51.805 "r_mbytes_per_sec": 0, 00:26:51.805 "w_mbytes_per_sec": 0 00:26:51.805 }, 00:26:51.805 "claimed": true, 00:26:51.805 "claim_type": "exclusive_write", 00:26:51.805 "zoned": false, 00:26:51.805 "supported_io_types": { 00:26:51.805 "read": true, 00:26:51.805 "write": true, 00:26:51.805 "unmap": true, 00:26:51.805 "flush": true, 00:26:51.805 "reset": true, 00:26:51.805 "nvme_admin": false, 00:26:51.805 "nvme_io": false, 00:26:51.805 "nvme_io_md": false, 00:26:51.805 "write_zeroes": true, 00:26:51.805 "zcopy": true, 00:26:51.805 "get_zone_info": false, 00:26:51.805 "zone_management": false, 00:26:51.805 "zone_append": false, 00:26:51.805 "compare": false, 00:26:51.805 "compare_and_write": false, 00:26:51.805 "abort": true, 00:26:51.805 "seek_hole": false, 00:26:51.805 "seek_data": false, 00:26:51.805 "copy": true, 00:26:51.805 "nvme_iov_md": false 00:26:51.805 }, 00:26:51.805 "memory_domains": [ 00:26:51.805 { 00:26:51.805 "dma_device_id": "system", 00:26:51.805 "dma_device_type": 1 00:26:51.805 }, 00:26:51.805 { 00:26:51.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.805 "dma_device_type": 2 00:26:51.805 } 00:26:51.805 ], 00:26:51.805 "driver_specific": {} 00:26:51.805 } 00:26:51.805 
] 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.064 15:56:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:52.064 "name": "Existed_Raid", 00:26:52.064 "uuid": "e3bf7ad8-b4db-4c76-945b-8c0fe9f17fe2", 00:26:52.064 "strip_size_kb": 0, 00:26:52.064 "state": "online", 00:26:52.064 "raid_level": "raid1", 00:26:52.064 "superblock": false, 00:26:52.064 "num_base_bdevs": 3, 00:26:52.064 "num_base_bdevs_discovered": 3, 00:26:52.064 "num_base_bdevs_operational": 3, 00:26:52.064 "base_bdevs_list": [ 00:26:52.064 { 00:26:52.064 "name": "NewBaseBdev", 00:26:52.064 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:52.064 "is_configured": true, 00:26:52.064 "data_offset": 0, 00:26:52.064 "data_size": 65536 00:26:52.064 }, 00:26:52.064 { 00:26:52.064 "name": "BaseBdev2", 00:26:52.064 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:52.064 "is_configured": true, 00:26:52.064 "data_offset": 0, 00:26:52.064 "data_size": 65536 00:26:52.064 }, 00:26:52.064 { 00:26:52.064 "name": "BaseBdev3", 00:26:52.064 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:52.064 "is_configured": true, 00:26:52.064 "data_offset": 0, 00:26:52.064 "data_size": 65536 00:26:52.064 } 00:26:52.064 ] 00:26:52.064 }' 00:26:52.064 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.064 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:52.322 [2024-11-05 15:56:24.502002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.322 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.322 "name": "Existed_Raid", 00:26:52.322 "aliases": [ 00:26:52.322 "e3bf7ad8-b4db-4c76-945b-8c0fe9f17fe2" 00:26:52.322 ], 00:26:52.322 "product_name": "Raid Volume", 00:26:52.322 "block_size": 512, 00:26:52.322 "num_blocks": 65536, 00:26:52.322 "uuid": "e3bf7ad8-b4db-4c76-945b-8c0fe9f17fe2", 00:26:52.322 "assigned_rate_limits": { 00:26:52.322 "rw_ios_per_sec": 0, 00:26:52.322 "rw_mbytes_per_sec": 0, 00:26:52.322 "r_mbytes_per_sec": 0, 00:26:52.322 "w_mbytes_per_sec": 0 00:26:52.322 }, 00:26:52.322 "claimed": false, 00:26:52.322 "zoned": false, 00:26:52.322 "supported_io_types": { 00:26:52.322 "read": true, 00:26:52.322 "write": true, 00:26:52.322 "unmap": false, 00:26:52.322 "flush": false, 00:26:52.322 "reset": true, 00:26:52.322 "nvme_admin": false, 00:26:52.322 "nvme_io": false, 00:26:52.322 "nvme_io_md": false, 00:26:52.322 "write_zeroes": true, 00:26:52.322 "zcopy": false, 00:26:52.322 "get_zone_info": false, 00:26:52.322 "zone_management": false, 00:26:52.322 "zone_append": false, 00:26:52.322 "compare": false, 00:26:52.322 "compare_and_write": false, 00:26:52.322 "abort": false, 00:26:52.322 "seek_hole": false, 00:26:52.322 "seek_data": false, 00:26:52.322 "copy": 
false, 00:26:52.322 "nvme_iov_md": false 00:26:52.322 }, 00:26:52.322 "memory_domains": [ 00:26:52.322 { 00:26:52.322 "dma_device_id": "system", 00:26:52.322 "dma_device_type": 1 00:26:52.322 }, 00:26:52.322 { 00:26:52.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.322 "dma_device_type": 2 00:26:52.322 }, 00:26:52.322 { 00:26:52.322 "dma_device_id": "system", 00:26:52.322 "dma_device_type": 1 00:26:52.322 }, 00:26:52.322 { 00:26:52.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.322 "dma_device_type": 2 00:26:52.322 }, 00:26:52.322 { 00:26:52.322 "dma_device_id": "system", 00:26:52.322 "dma_device_type": 1 00:26:52.322 }, 00:26:52.322 { 00:26:52.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.323 "dma_device_type": 2 00:26:52.323 } 00:26:52.323 ], 00:26:52.323 "driver_specific": { 00:26:52.323 "raid": { 00:26:52.323 "uuid": "e3bf7ad8-b4db-4c76-945b-8c0fe9f17fe2", 00:26:52.323 "strip_size_kb": 0, 00:26:52.323 "state": "online", 00:26:52.323 "raid_level": "raid1", 00:26:52.323 "superblock": false, 00:26:52.323 "num_base_bdevs": 3, 00:26:52.323 "num_base_bdevs_discovered": 3, 00:26:52.323 "num_base_bdevs_operational": 3, 00:26:52.323 "base_bdevs_list": [ 00:26:52.323 { 00:26:52.323 "name": "NewBaseBdev", 00:26:52.323 "uuid": "8d14f099-722d-4b3c-abce-eab2002f8adc", 00:26:52.323 "is_configured": true, 00:26:52.323 "data_offset": 0, 00:26:52.323 "data_size": 65536 00:26:52.323 }, 00:26:52.323 { 00:26:52.323 "name": "BaseBdev2", 00:26:52.323 "uuid": "fad0ab4d-4487-4080-8341-4d17f9bc0fae", 00:26:52.323 "is_configured": true, 00:26:52.323 "data_offset": 0, 00:26:52.323 "data_size": 65536 00:26:52.323 }, 00:26:52.323 { 00:26:52.323 "name": "BaseBdev3", 00:26:52.323 "uuid": "92a5097f-0ac6-4dce-af07-e2fe91b2b309", 00:26:52.323 "is_configured": true, 00:26:52.323 "data_offset": 0, 00:26:52.323 "data_size": 65536 00:26:52.323 } 00:26:52.323 ] 00:26:52.323 } 00:26:52.323 } 00:26:52.323 }' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:52.323 BaseBdev2 00:26:52.323 BaseBdev3' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.323 [2024-11-05 15:56:24.685752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:52.323 [2024-11-05 15:56:24.685853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:52.323 [2024-11-05 
15:56:24.685910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.323 [2024-11-05 15:56:24.686132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:52.323 [2024-11-05 15:56:24.686141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65598 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65598 ']' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65598 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65598 00:26:52.323 killing process with pid 65598 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65598' 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 65598 00:26:52.323 [2024-11-05 15:56:24.718421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:52.323 15:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65598 00:26:52.581 [2024-11-05 15:56:24.860031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:53.146 00:26:53.146 real 0m7.260s 00:26:53.146 user 0m11.777s 00:26:53.146 sys 0m1.140s 00:26:53.146 ************************************ 00:26:53.146 END TEST raid_state_function_test 00:26:53.146 ************************************ 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.146 15:56:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:26:53.146 15:56:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:53.146 15:56:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:53.146 15:56:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:53.146 ************************************ 00:26:53.146 START TEST raid_state_function_test_sb 00:26:53.146 ************************************ 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:53.146 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66192 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66192' 00:26:53.147 Process raid pid: 66192 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66192 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66192 ']' 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:53.147 15:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.147 [2024-11-05 15:56:25.526540] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:26:53.147 [2024-11-05 15:56:25.526760] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.404 [2024-11-05 15:56:25.679716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.404 [2024-11-05 15:56:25.760906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.663 [2024-11-05 15:56:25.869425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:53.663 [2024-11-05 15:56:25.869450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.921 [2024-11-05 15:56:26.326179] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:53.921 [2024-11-05 15:56:26.326221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:53.921 [2024-11-05 15:56:26.326229] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:53.921 [2024-11-05 15:56:26.326237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:53.921 [2024-11-05 15:56:26.326242] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:26:53.921 [2024-11-05 15:56:26.326249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.921 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.179 15:56:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.179 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.179 "name": "Existed_Raid", 00:26:54.179 "uuid": "294b72c8-ac95-4cf2-af5d-3fa2d6238032", 00:26:54.179 "strip_size_kb": 0, 00:26:54.179 "state": "configuring", 00:26:54.179 "raid_level": "raid1", 00:26:54.179 "superblock": true, 00:26:54.179 "num_base_bdevs": 3, 00:26:54.179 "num_base_bdevs_discovered": 0, 00:26:54.179 "num_base_bdevs_operational": 3, 00:26:54.179 "base_bdevs_list": [ 00:26:54.179 { 00:26:54.179 "name": "BaseBdev1", 00:26:54.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.179 "is_configured": false, 00:26:54.179 "data_offset": 0, 00:26:54.179 "data_size": 0 00:26:54.179 }, 00:26:54.179 { 00:26:54.179 "name": "BaseBdev2", 00:26:54.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.179 "is_configured": false, 00:26:54.179 "data_offset": 0, 00:26:54.179 "data_size": 0 00:26:54.179 }, 00:26:54.179 { 00:26:54.179 "name": "BaseBdev3", 00:26:54.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.179 "is_configured": false, 00:26:54.179 "data_offset": 0, 00:26:54.179 "data_size": 0 00:26:54.179 } 00:26:54.179 ] 00:26:54.179 }' 00:26:54.179 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.179 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.437 [2024-11-05 15:56:26.650202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:54.437 [2024-11-05 15:56:26.650323] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.437 [2024-11-05 15:56:26.658200] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:54.437 [2024-11-05 15:56:26.658232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:54.437 [2024-11-05 15:56:26.658239] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:54.437 [2024-11-05 15:56:26.658246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:54.437 [2024-11-05 15:56:26.658251] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:54.437 [2024-11-05 15:56:26.658258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.437 [2024-11-05 15:56:26.685935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:54.437 BaseBdev1 
00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:54.437 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.438 [ 00:26:54.438 { 00:26:54.438 "name": "BaseBdev1", 00:26:54.438 "aliases": [ 00:26:54.438 "fbe53c14-e8a2-4d22-b5af-b4037faf58e4" 00:26:54.438 ], 00:26:54.438 "product_name": "Malloc disk", 00:26:54.438 "block_size": 512, 00:26:54.438 "num_blocks": 65536, 00:26:54.438 "uuid": "fbe53c14-e8a2-4d22-b5af-b4037faf58e4", 00:26:54.438 "assigned_rate_limits": { 00:26:54.438 
"rw_ios_per_sec": 0, 00:26:54.438 "rw_mbytes_per_sec": 0, 00:26:54.438 "r_mbytes_per_sec": 0, 00:26:54.438 "w_mbytes_per_sec": 0 00:26:54.438 }, 00:26:54.438 "claimed": true, 00:26:54.438 "claim_type": "exclusive_write", 00:26:54.438 "zoned": false, 00:26:54.438 "supported_io_types": { 00:26:54.438 "read": true, 00:26:54.438 "write": true, 00:26:54.438 "unmap": true, 00:26:54.438 "flush": true, 00:26:54.438 "reset": true, 00:26:54.438 "nvme_admin": false, 00:26:54.438 "nvme_io": false, 00:26:54.438 "nvme_io_md": false, 00:26:54.438 "write_zeroes": true, 00:26:54.438 "zcopy": true, 00:26:54.438 "get_zone_info": false, 00:26:54.438 "zone_management": false, 00:26:54.438 "zone_append": false, 00:26:54.438 "compare": false, 00:26:54.438 "compare_and_write": false, 00:26:54.438 "abort": true, 00:26:54.438 "seek_hole": false, 00:26:54.438 "seek_data": false, 00:26:54.438 "copy": true, 00:26:54.438 "nvme_iov_md": false 00:26:54.438 }, 00:26:54.438 "memory_domains": [ 00:26:54.438 { 00:26:54.438 "dma_device_id": "system", 00:26:54.438 "dma_device_type": 1 00:26:54.438 }, 00:26:54.438 { 00:26:54.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.438 "dma_device_type": 2 00:26:54.438 } 00:26:54.438 ], 00:26:54.438 "driver_specific": {} 00:26:54.438 } 00:26:54.438 ] 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.438 "name": "Existed_Raid", 00:26:54.438 "uuid": "5da4a23e-08fc-4fdc-bd7d-6fe4a3f74315", 00:26:54.438 "strip_size_kb": 0, 00:26:54.438 "state": "configuring", 00:26:54.438 "raid_level": "raid1", 00:26:54.438 "superblock": true, 00:26:54.438 "num_base_bdevs": 3, 00:26:54.438 "num_base_bdevs_discovered": 1, 00:26:54.438 "num_base_bdevs_operational": 3, 00:26:54.438 "base_bdevs_list": [ 00:26:54.438 { 00:26:54.438 "name": "BaseBdev1", 00:26:54.438 "uuid": "fbe53c14-e8a2-4d22-b5af-b4037faf58e4", 00:26:54.438 "is_configured": true, 00:26:54.438 "data_offset": 2048, 00:26:54.438 "data_size": 63488 
00:26:54.438 }, 00:26:54.438 { 00:26:54.438 "name": "BaseBdev2", 00:26:54.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.438 "is_configured": false, 00:26:54.438 "data_offset": 0, 00:26:54.438 "data_size": 0 00:26:54.438 }, 00:26:54.438 { 00:26:54.438 "name": "BaseBdev3", 00:26:54.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.438 "is_configured": false, 00:26:54.438 "data_offset": 0, 00:26:54.438 "data_size": 0 00:26:54.438 } 00:26:54.438 ] 00:26:54.438 }' 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.438 15:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.696 [2024-11-05 15:56:27.050056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:54.696 [2024-11-05 15:56:27.050093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.696 [2024-11-05 15:56:27.058085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:54.696 [2024-11-05 15:56:27.059687] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:54.696 [2024-11-05 15:56:27.059796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:54.696 [2024-11-05 15:56:27.059858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:54.696 [2024-11-05 15:56:27.059882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.696 "name": "Existed_Raid", 00:26:54.696 "uuid": "23fb2d44-80e0-4245-bb12-040f4f566cb4", 00:26:54.696 "strip_size_kb": 0, 00:26:54.696 "state": "configuring", 00:26:54.696 "raid_level": "raid1", 00:26:54.696 "superblock": true, 00:26:54.696 "num_base_bdevs": 3, 00:26:54.696 "num_base_bdevs_discovered": 1, 00:26:54.696 "num_base_bdevs_operational": 3, 00:26:54.696 "base_bdevs_list": [ 00:26:54.696 { 00:26:54.696 "name": "BaseBdev1", 00:26:54.696 "uuid": "fbe53c14-e8a2-4d22-b5af-b4037faf58e4", 00:26:54.696 "is_configured": true, 00:26:54.696 "data_offset": 2048, 00:26:54.696 "data_size": 63488 00:26:54.696 }, 00:26:54.696 { 00:26:54.696 "name": "BaseBdev2", 00:26:54.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.696 "is_configured": false, 00:26:54.696 "data_offset": 0, 00:26:54.696 "data_size": 0 00:26:54.696 }, 00:26:54.696 { 00:26:54.696 "name": "BaseBdev3", 00:26:54.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.696 "is_configured": false, 00:26:54.696 "data_offset": 0, 00:26:54.696 "data_size": 0 00:26:54.696 } 00:26:54.696 ] 00:26:54.696 }' 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.696 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.262 [2024-11-05 15:56:27.424418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:55.262 BaseBdev2 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.262 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.263 [ 00:26:55.263 { 00:26:55.263 "name": "BaseBdev2", 00:26:55.263 "aliases": [ 00:26:55.263 "661e8fa1-b36b-4a18-8a27-99a0ad8eff99" 00:26:55.263 ], 00:26:55.263 "product_name": "Malloc disk", 00:26:55.263 "block_size": 512, 00:26:55.263 "num_blocks": 65536, 00:26:55.263 "uuid": "661e8fa1-b36b-4a18-8a27-99a0ad8eff99", 00:26:55.263 "assigned_rate_limits": { 00:26:55.263 "rw_ios_per_sec": 0, 00:26:55.263 "rw_mbytes_per_sec": 0, 00:26:55.263 "r_mbytes_per_sec": 0, 00:26:55.263 "w_mbytes_per_sec": 0 00:26:55.263 }, 00:26:55.263 "claimed": true, 00:26:55.263 "claim_type": "exclusive_write", 00:26:55.263 "zoned": false, 00:26:55.263 "supported_io_types": { 00:26:55.263 "read": true, 00:26:55.263 "write": true, 00:26:55.263 "unmap": true, 00:26:55.263 "flush": true, 00:26:55.263 "reset": true, 00:26:55.263 "nvme_admin": false, 00:26:55.263 "nvme_io": false, 00:26:55.263 "nvme_io_md": false, 00:26:55.263 "write_zeroes": true, 00:26:55.263 "zcopy": true, 00:26:55.263 "get_zone_info": false, 00:26:55.263 "zone_management": false, 00:26:55.263 "zone_append": false, 00:26:55.263 "compare": false, 00:26:55.263 "compare_and_write": false, 00:26:55.263 "abort": true, 00:26:55.263 "seek_hole": false, 00:26:55.263 "seek_data": false, 00:26:55.263 "copy": true, 00:26:55.263 "nvme_iov_md": false 00:26:55.263 }, 00:26:55.263 "memory_domains": [ 00:26:55.263 { 00:26:55.263 "dma_device_id": "system", 00:26:55.263 "dma_device_type": 1 00:26:55.263 }, 00:26:55.263 { 00:26:55.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.263 "dma_device_type": 2 00:26:55.263 } 00:26:55.263 ], 00:26:55.263 "driver_specific": {} 00:26:55.263 } 00:26:55.263 ] 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.263 
15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.263 "name": "Existed_Raid", 00:26:55.263 "uuid": "23fb2d44-80e0-4245-bb12-040f4f566cb4", 00:26:55.263 "strip_size_kb": 0, 00:26:55.263 "state": "configuring", 00:26:55.263 "raid_level": "raid1", 00:26:55.263 "superblock": true, 00:26:55.263 "num_base_bdevs": 3, 00:26:55.263 "num_base_bdevs_discovered": 2, 00:26:55.263 "num_base_bdevs_operational": 3, 00:26:55.263 "base_bdevs_list": [ 00:26:55.263 { 00:26:55.263 "name": "BaseBdev1", 00:26:55.263 "uuid": "fbe53c14-e8a2-4d22-b5af-b4037faf58e4", 00:26:55.263 "is_configured": true, 00:26:55.263 "data_offset": 2048, 00:26:55.263 "data_size": 63488 00:26:55.263 }, 00:26:55.263 { 00:26:55.263 "name": "BaseBdev2", 00:26:55.263 "uuid": "661e8fa1-b36b-4a18-8a27-99a0ad8eff99", 00:26:55.263 "is_configured": true, 00:26:55.263 "data_offset": 2048, 00:26:55.263 "data_size": 63488 00:26:55.263 }, 00:26:55.263 { 00:26:55.263 "name": "BaseBdev3", 00:26:55.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.263 "is_configured": false, 00:26:55.263 "data_offset": 0, 00:26:55.263 "data_size": 0 00:26:55.263 } 00:26:55.263 ] 00:26:55.263 }' 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.263 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.521 [2024-11-05 15:56:27.809885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:55.521 [2024-11-05 15:56:27.810202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:26:55.521 [2024-11-05 15:56:27.810241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:55.521 [2024-11-05 15:56:27.810525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:55.521 [2024-11-05 15:56:27.810707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:55.521 [2024-11-05 15:56:27.810769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:55.521 BaseBdev3 00:26:55.521 [2024-11-05 15:56:27.810945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.521 15:56:27
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.521 [ 00:26:55.521 { 00:26:55.521 "name": "BaseBdev3", 00:26:55.521 "aliases": [ 00:26:55.521 "33763b7f-ba25-4577-a29a-206ef99626c0" 00:26:55.521 ], 00:26:55.521 "product_name": "Malloc disk", 00:26:55.521 "block_size": 512, 00:26:55.521 "num_blocks": 65536, 00:26:55.521 "uuid": "33763b7f-ba25-4577-a29a-206ef99626c0", 00:26:55.521 "assigned_rate_limits": { 00:26:55.521 "rw_ios_per_sec": 0, 00:26:55.521 "rw_mbytes_per_sec": 0, 00:26:55.521 "r_mbytes_per_sec": 0, 00:26:55.521 "w_mbytes_per_sec": 0 00:26:55.521 }, 00:26:55.521 "claimed": true, 00:26:55.521 "claim_type": "exclusive_write", 00:26:55.521 "zoned": false, 00:26:55.521 "supported_io_types": { 00:26:55.521 "read": true, 00:26:55.521 "write": true, 00:26:55.521 "unmap": true, 00:26:55.521 "flush": true, 00:26:55.521 "reset": true, 00:26:55.521 "nvme_admin": false, 00:26:55.521 "nvme_io": false, 00:26:55.521 "nvme_io_md": false, 00:26:55.521 "write_zeroes": true, 00:26:55.521 "zcopy": true, 00:26:55.521 "get_zone_info": false, 00:26:55.521 "zone_management": false, 00:26:55.521 "zone_append": false, 00:26:55.521 "compare": false, 00:26:55.521 "compare_and_write": false, 00:26:55.521 "abort": true, 00:26:55.521 "seek_hole": false, 00:26:55.521 "seek_data": false, 00:26:55.521 "copy": true, 00:26:55.521 "nvme_iov_md": false 00:26:55.521 }, 00:26:55.521 "memory_domains": [ 00:26:55.521 { 00:26:55.521 "dma_device_id": "system", 00:26:55.521 "dma_device_type": 1 00:26:55.521 }, 00:26:55.521 { 00:26:55.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.521 "dma_device_type": 2 00:26:55.521 } 00:26:55.521 ], 00:26:55.521 "driver_specific": {} 00:26:55.521 } 00:26:55.521 ] 
00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.521 15:56:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.521 "name": "Existed_Raid", 00:26:55.521 "uuid": "23fb2d44-80e0-4245-bb12-040f4f566cb4", 00:26:55.521 "strip_size_kb": 0, 00:26:55.521 "state": "online", 00:26:55.521 "raid_level": "raid1", 00:26:55.521 "superblock": true, 00:26:55.521 "num_base_bdevs": 3, 00:26:55.521 "num_base_bdevs_discovered": 3, 00:26:55.521 "num_base_bdevs_operational": 3, 00:26:55.521 "base_bdevs_list": [ 00:26:55.521 { 00:26:55.521 "name": "BaseBdev1", 00:26:55.521 "uuid": "fbe53c14-e8a2-4d22-b5af-b4037faf58e4", 00:26:55.521 "is_configured": true, 00:26:55.521 "data_offset": 2048, 00:26:55.521 "data_size": 63488 00:26:55.521 }, 00:26:55.521 { 00:26:55.521 "name": "BaseBdev2", 00:26:55.521 "uuid": "661e8fa1-b36b-4a18-8a27-99a0ad8eff99", 00:26:55.521 "is_configured": true, 00:26:55.521 "data_offset": 2048, 00:26:55.521 "data_size": 63488 00:26:55.521 }, 00:26:55.521 { 00:26:55.521 "name": "BaseBdev3", 00:26:55.521 "uuid": "33763b7f-ba25-4577-a29a-206ef99626c0", 00:26:55.521 "is_configured": true, 00:26:55.521 "data_offset": 2048, 00:26:55.521 "data_size": 63488 00:26:55.521 } 00:26:55.521 ] 00:26:55.521 }' 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.521 15:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.778 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.779 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:55.779 [2024-11-05 15:56:28.186266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:56.036 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.036 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:56.036 "name": "Existed_Raid", 00:26:56.036 "aliases": [ 00:26:56.036 "23fb2d44-80e0-4245-bb12-040f4f566cb4" 00:26:56.036 ], 00:26:56.036 "product_name": "Raid Volume", 00:26:56.036 "block_size": 512, 00:26:56.036 "num_blocks": 63488, 00:26:56.036 "uuid": "23fb2d44-80e0-4245-bb12-040f4f566cb4", 00:26:56.036 "assigned_rate_limits": { 00:26:56.036 "rw_ios_per_sec": 0, 00:26:56.036 "rw_mbytes_per_sec": 0, 00:26:56.036 "r_mbytes_per_sec": 0, 00:26:56.036 "w_mbytes_per_sec": 0 00:26:56.036 }, 00:26:56.036 "claimed": false, 00:26:56.036 "zoned": false, 00:26:56.036 "supported_io_types": { 00:26:56.036 "read": true, 00:26:56.036 "write": true, 00:26:56.036 "unmap": false, 00:26:56.036 "flush": false, 00:26:56.036 "reset": true, 00:26:56.036 "nvme_admin": false, 00:26:56.036 "nvme_io": false, 00:26:56.036 "nvme_io_md": false, 00:26:56.036 
"write_zeroes": true, 00:26:56.036 "zcopy": false, 00:26:56.036 "get_zone_info": false, 00:26:56.036 "zone_management": false, 00:26:56.036 "zone_append": false, 00:26:56.036 "compare": false, 00:26:56.036 "compare_and_write": false, 00:26:56.036 "abort": false, 00:26:56.036 "seek_hole": false, 00:26:56.036 "seek_data": false, 00:26:56.036 "copy": false, 00:26:56.036 "nvme_iov_md": false 00:26:56.036 }, 00:26:56.036 "memory_domains": [ 00:26:56.036 { 00:26:56.036 "dma_device_id": "system", 00:26:56.036 "dma_device_type": 1 00:26:56.036 }, 00:26:56.036 { 00:26:56.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.036 "dma_device_type": 2 00:26:56.036 }, 00:26:56.037 { 00:26:56.037 "dma_device_id": "system", 00:26:56.037 "dma_device_type": 1 00:26:56.037 }, 00:26:56.037 { 00:26:56.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.037 "dma_device_type": 2 00:26:56.037 }, 00:26:56.037 { 00:26:56.037 "dma_device_id": "system", 00:26:56.037 "dma_device_type": 1 00:26:56.037 }, 00:26:56.037 { 00:26:56.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.037 "dma_device_type": 2 00:26:56.037 } 00:26:56.037 ], 00:26:56.037 "driver_specific": { 00:26:56.037 "raid": { 00:26:56.037 "uuid": "23fb2d44-80e0-4245-bb12-040f4f566cb4", 00:26:56.037 "strip_size_kb": 0, 00:26:56.037 "state": "online", 00:26:56.037 "raid_level": "raid1", 00:26:56.037 "superblock": true, 00:26:56.037 "num_base_bdevs": 3, 00:26:56.037 "num_base_bdevs_discovered": 3, 00:26:56.037 "num_base_bdevs_operational": 3, 00:26:56.037 "base_bdevs_list": [ 00:26:56.037 { 00:26:56.037 "name": "BaseBdev1", 00:26:56.037 "uuid": "fbe53c14-e8a2-4d22-b5af-b4037faf58e4", 00:26:56.037 "is_configured": true, 00:26:56.037 "data_offset": 2048, 00:26:56.037 "data_size": 63488 00:26:56.037 }, 00:26:56.037 { 00:26:56.037 "name": "BaseBdev2", 00:26:56.037 "uuid": "661e8fa1-b36b-4a18-8a27-99a0ad8eff99", 00:26:56.037 "is_configured": true, 00:26:56.037 "data_offset": 2048, 00:26:56.037 "data_size": 63488 00:26:56.037 }, 
00:26:56.037 { 00:26:56.037 "name": "BaseBdev3", 00:26:56.037 "uuid": "33763b7f-ba25-4577-a29a-206ef99626c0", 00:26:56.037 "is_configured": true, 00:26:56.037 "data_offset": 2048, 00:26:56.037 "data_size": 63488 00:26:56.037 } 00:26:56.037 ] 00:26:56.037 } 00:26:56.037 } 00:26:56.037 }' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:56.037 BaseBdev2 00:26:56.037 BaseBdev3' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:56.037 
15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.037 [2024-11-05 15:56:28.386052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:56.037 
15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.037 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.295 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.295 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:56.295 "name": "Existed_Raid", 00:26:56.295 "uuid": "23fb2d44-80e0-4245-bb12-040f4f566cb4", 00:26:56.295 "strip_size_kb": 0, 00:26:56.295 "state": "online", 00:26:56.295 "raid_level": "raid1", 00:26:56.295 "superblock": true, 00:26:56.295 "num_base_bdevs": 3, 00:26:56.295 "num_base_bdevs_discovered": 2, 00:26:56.295 "num_base_bdevs_operational": 2, 00:26:56.295 "base_bdevs_list": [ 00:26:56.295 { 00:26:56.295 "name": null, 00:26:56.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.295 "is_configured": false, 00:26:56.295 "data_offset": 0, 00:26:56.295 "data_size": 63488 00:26:56.295 }, 00:26:56.295 { 00:26:56.295 "name": "BaseBdev2", 00:26:56.295 "uuid": "661e8fa1-b36b-4a18-8a27-99a0ad8eff99", 00:26:56.295 "is_configured": true, 00:26:56.295 "data_offset": 2048, 00:26:56.295 "data_size": 63488 00:26:56.295 }, 00:26:56.295 { 00:26:56.295 "name": "BaseBdev3", 00:26:56.295 "uuid": "33763b7f-ba25-4577-a29a-206ef99626c0", 00:26:56.295 "is_configured": true, 00:26:56.295 "data_offset": 2048, 00:26:56.295 "data_size": 63488 00:26:56.295 } 00:26:56.295 ] 00:26:56.295 }' 00:26:56.295 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:56.296 
15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.554 [2024-11-05 15:56:28.824085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.554 [2024-11-05 15:56:28.914667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:56.554 [2024-11-05 15:56:28.914745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:56.554 [2024-11-05 15:56:28.960946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:56.554 [2024-11-05 15:56:28.961084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:56.554 [2024-11-05 15:56:28.961143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.554 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.813 15:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.813 BaseBdev2 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:56.813 15:56:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.813 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.814 [ 00:26:56.814 { 00:26:56.814 "name": "BaseBdev2", 00:26:56.814 "aliases": [ 00:26:56.814 "d92deebe-c6d5-45c5-b81f-67413c3e6e3d" 00:26:56.814 ], 00:26:56.814 "product_name": "Malloc disk", 00:26:56.814 "block_size": 512, 00:26:56.814 "num_blocks": 65536, 00:26:56.814 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:56.814 "assigned_rate_limits": { 00:26:56.814 "rw_ios_per_sec": 0, 00:26:56.814 "rw_mbytes_per_sec": 0, 00:26:56.814 "r_mbytes_per_sec": 0, 00:26:56.814 "w_mbytes_per_sec": 0 00:26:56.814 }, 00:26:56.814 "claimed": false, 00:26:56.814 "zoned": false, 00:26:56.814 "supported_io_types": { 00:26:56.814 "read": true, 00:26:56.814 "write": true, 00:26:56.814 "unmap": true, 00:26:56.814 "flush": true, 00:26:56.814 "reset": true, 00:26:56.814 "nvme_admin": false, 00:26:56.814 "nvme_io": false, 00:26:56.814 "nvme_io_md": false, 00:26:56.814 
"write_zeroes": true, 00:26:56.814 "zcopy": true, 00:26:56.814 "get_zone_info": false, 00:26:56.814 "zone_management": false, 00:26:56.814 "zone_append": false, 00:26:56.814 "compare": false, 00:26:56.814 "compare_and_write": false, 00:26:56.814 "abort": true, 00:26:56.814 "seek_hole": false, 00:26:56.814 "seek_data": false, 00:26:56.814 "copy": true, 00:26:56.814 "nvme_iov_md": false 00:26:56.814 }, 00:26:56.814 "memory_domains": [ 00:26:56.814 { 00:26:56.814 "dma_device_id": "system", 00:26:56.814 "dma_device_type": 1 00:26:56.814 }, 00:26:56.814 { 00:26:56.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.814 "dma_device_type": 2 00:26:56.814 } 00:26:56.814 ], 00:26:56.814 "driver_specific": {} 00:26:56.814 } 00:26:56.814 ] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.814 BaseBdev3 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
local bdev_timeout= 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.814 [ 00:26:56.814 { 00:26:56.814 "name": "BaseBdev3", 00:26:56.814 "aliases": [ 00:26:56.814 "4d86b39b-e37a-497e-8fd5-d29d76ab8d14" 00:26:56.814 ], 00:26:56.814 "product_name": "Malloc disk", 00:26:56.814 "block_size": 512, 00:26:56.814 "num_blocks": 65536, 00:26:56.814 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:56.814 "assigned_rate_limits": { 00:26:56.814 "rw_ios_per_sec": 0, 00:26:56.814 "rw_mbytes_per_sec": 0, 00:26:56.814 "r_mbytes_per_sec": 0, 00:26:56.814 "w_mbytes_per_sec": 0 00:26:56.814 }, 00:26:56.814 "claimed": false, 00:26:56.814 "zoned": false, 00:26:56.814 "supported_io_types": { 00:26:56.814 "read": true, 00:26:56.814 "write": true, 00:26:56.814 "unmap": true, 00:26:56.814 "flush": true, 00:26:56.814 "reset": true, 00:26:56.814 "nvme_admin": false, 00:26:56.814 "nvme_io": false, 
00:26:56.814 "nvme_io_md": false, 00:26:56.814 "write_zeroes": true, 00:26:56.814 "zcopy": true, 00:26:56.814 "get_zone_info": false, 00:26:56.814 "zone_management": false, 00:26:56.814 "zone_append": false, 00:26:56.814 "compare": false, 00:26:56.814 "compare_and_write": false, 00:26:56.814 "abort": true, 00:26:56.814 "seek_hole": false, 00:26:56.814 "seek_data": false, 00:26:56.814 "copy": true, 00:26:56.814 "nvme_iov_md": false 00:26:56.814 }, 00:26:56.814 "memory_domains": [ 00:26:56.814 { 00:26:56.814 "dma_device_id": "system", 00:26:56.814 "dma_device_type": 1 00:26:56.814 }, 00:26:56.814 { 00:26:56.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.814 "dma_device_type": 2 00:26:56.814 } 00:26:56.814 ], 00:26:56.814 "driver_specific": {} 00:26:56.814 } 00:26:56.814 ] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.814 [2024-11-05 15:56:29.095376] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:56.814 [2024-11-05 15:56:29.095493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:56.814 [2024-11-05 15:56:29.095549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:26:56.814 [2024-11-05 15:56:29.097064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:56.814 "name": "Existed_Raid", 00:26:56.814 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:56.814 "strip_size_kb": 0, 00:26:56.814 "state": "configuring", 00:26:56.814 "raid_level": "raid1", 00:26:56.814 "superblock": true, 00:26:56.814 "num_base_bdevs": 3, 00:26:56.814 "num_base_bdevs_discovered": 2, 00:26:56.814 "num_base_bdevs_operational": 3, 00:26:56.814 "base_bdevs_list": [ 00:26:56.814 { 00:26:56.814 "name": "BaseBdev1", 00:26:56.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.814 "is_configured": false, 00:26:56.814 "data_offset": 0, 00:26:56.814 "data_size": 0 00:26:56.814 }, 00:26:56.814 { 00:26:56.814 "name": "BaseBdev2", 00:26:56.814 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:56.814 "is_configured": true, 00:26:56.814 "data_offset": 2048, 00:26:56.814 "data_size": 63488 00:26:56.814 }, 00:26:56.814 { 00:26:56.814 "name": "BaseBdev3", 00:26:56.814 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:56.814 "is_configured": true, 00:26:56.814 "data_offset": 2048, 00:26:56.814 "data_size": 63488 00:26:56.814 } 00:26:56.814 ] 00:26:56.814 }' 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:56.814 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.073 [2024-11-05 15:56:29.439443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:57.073 "name": "Existed_Raid", 00:26:57.073 "uuid": 
"4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:57.073 "strip_size_kb": 0, 00:26:57.073 "state": "configuring", 00:26:57.073 "raid_level": "raid1", 00:26:57.073 "superblock": true, 00:26:57.073 "num_base_bdevs": 3, 00:26:57.073 "num_base_bdevs_discovered": 1, 00:26:57.073 "num_base_bdevs_operational": 3, 00:26:57.073 "base_bdevs_list": [ 00:26:57.073 { 00:26:57.073 "name": "BaseBdev1", 00:26:57.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.073 "is_configured": false, 00:26:57.073 "data_offset": 0, 00:26:57.073 "data_size": 0 00:26:57.073 }, 00:26:57.073 { 00:26:57.073 "name": null, 00:26:57.073 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:57.073 "is_configured": false, 00:26:57.073 "data_offset": 0, 00:26:57.073 "data_size": 63488 00:26:57.073 }, 00:26:57.073 { 00:26:57.073 "name": "BaseBdev3", 00:26:57.073 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:57.073 "is_configured": true, 00:26:57.073 "data_offset": 2048, 00:26:57.073 "data_size": 63488 00:26:57.073 } 00:26:57.073 ] 00:26:57.073 }' 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:57.073 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:57.640 15:56:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.640 [2024-11-05 15:56:29.833209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:57.640 BaseBdev1 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.640 [ 00:26:57.640 { 00:26:57.640 "name": "BaseBdev1", 00:26:57.640 "aliases": [ 00:26:57.640 "f2a9aed1-686a-437a-9015-1db069b19257" 00:26:57.640 ], 00:26:57.640 "product_name": "Malloc disk", 00:26:57.640 "block_size": 512, 00:26:57.640 "num_blocks": 65536, 00:26:57.640 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:57.640 "assigned_rate_limits": { 00:26:57.640 "rw_ios_per_sec": 0, 00:26:57.640 "rw_mbytes_per_sec": 0, 00:26:57.640 "r_mbytes_per_sec": 0, 00:26:57.640 "w_mbytes_per_sec": 0 00:26:57.640 }, 00:26:57.640 "claimed": true, 00:26:57.640 "claim_type": "exclusive_write", 00:26:57.640 "zoned": false, 00:26:57.640 "supported_io_types": { 00:26:57.640 "read": true, 00:26:57.640 "write": true, 00:26:57.640 "unmap": true, 00:26:57.640 "flush": true, 00:26:57.640 "reset": true, 00:26:57.640 "nvme_admin": false, 00:26:57.640 "nvme_io": false, 00:26:57.640 "nvme_io_md": false, 00:26:57.640 "write_zeroes": true, 00:26:57.640 "zcopy": true, 00:26:57.640 "get_zone_info": false, 00:26:57.640 "zone_management": false, 00:26:57.640 "zone_append": false, 00:26:57.640 "compare": false, 00:26:57.640 "compare_and_write": false, 00:26:57.640 "abort": true, 00:26:57.640 "seek_hole": false, 00:26:57.640 "seek_data": false, 00:26:57.640 "copy": true, 00:26:57.640 "nvme_iov_md": false 00:26:57.640 }, 00:26:57.640 "memory_domains": [ 00:26:57.640 { 00:26:57.640 "dma_device_id": "system", 00:26:57.640 "dma_device_type": 1 00:26:57.640 }, 00:26:57.640 { 00:26:57.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.640 "dma_device_type": 2 00:26:57.640 } 00:26:57.640 ], 00:26:57.640 "driver_specific": {} 00:26:57.640 } 00:26:57.640 ] 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:57.640 
15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.640 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:57.640 "name": "Existed_Raid", 00:26:57.640 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:57.640 "strip_size_kb": 0, 
00:26:57.640 "state": "configuring", 00:26:57.640 "raid_level": "raid1", 00:26:57.640 "superblock": true, 00:26:57.640 "num_base_bdevs": 3, 00:26:57.640 "num_base_bdevs_discovered": 2, 00:26:57.640 "num_base_bdevs_operational": 3, 00:26:57.640 "base_bdevs_list": [ 00:26:57.640 { 00:26:57.640 "name": "BaseBdev1", 00:26:57.640 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:57.640 "is_configured": true, 00:26:57.640 "data_offset": 2048, 00:26:57.640 "data_size": 63488 00:26:57.640 }, 00:26:57.640 { 00:26:57.640 "name": null, 00:26:57.640 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:57.640 "is_configured": false, 00:26:57.640 "data_offset": 0, 00:26:57.640 "data_size": 63488 00:26:57.640 }, 00:26:57.640 { 00:26:57.640 "name": "BaseBdev3", 00:26:57.640 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:57.640 "is_configured": true, 00:26:57.640 "data_offset": 2048, 00:26:57.640 "data_size": 63488 00:26:57.640 } 00:26:57.640 ] 00:26:57.640 }' 00:26:57.641 15:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:57.641 15:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.922 [2024-11-05 15:56:30.225330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.922 15:56:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:57.922 "name": "Existed_Raid", 00:26:57.922 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:57.922 "strip_size_kb": 0, 00:26:57.922 "state": "configuring", 00:26:57.922 "raid_level": "raid1", 00:26:57.922 "superblock": true, 00:26:57.922 "num_base_bdevs": 3, 00:26:57.922 "num_base_bdevs_discovered": 1, 00:26:57.922 "num_base_bdevs_operational": 3, 00:26:57.922 "base_bdevs_list": [ 00:26:57.922 { 00:26:57.922 "name": "BaseBdev1", 00:26:57.922 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:57.922 "is_configured": true, 00:26:57.922 "data_offset": 2048, 00:26:57.922 "data_size": 63488 00:26:57.922 }, 00:26:57.922 { 00:26:57.922 "name": null, 00:26:57.922 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:57.922 "is_configured": false, 00:26:57.922 "data_offset": 0, 00:26:57.922 "data_size": 63488 00:26:57.922 }, 00:26:57.922 { 00:26:57.922 "name": null, 00:26:57.922 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:57.922 "is_configured": false, 00:26:57.922 "data_offset": 0, 00:26:57.922 "data_size": 63488 00:26:57.922 } 00:26:57.922 ] 00:26:57.922 }' 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:57.922 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.180 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.180 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.180 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.180 15:56:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:58.180 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.438 [2024-11-05 15:56:30.605413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:58.438 "name": "Existed_Raid", 00:26:58.438 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:58.438 "strip_size_kb": 0, 00:26:58.438 "state": "configuring", 00:26:58.438 "raid_level": "raid1", 00:26:58.438 "superblock": true, 00:26:58.438 "num_base_bdevs": 3, 00:26:58.438 "num_base_bdevs_discovered": 2, 00:26:58.438 "num_base_bdevs_operational": 3, 00:26:58.438 "base_bdevs_list": [ 00:26:58.438 { 00:26:58.438 "name": "BaseBdev1", 00:26:58.438 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:58.438 "is_configured": true, 00:26:58.438 "data_offset": 2048, 00:26:58.438 "data_size": 63488 00:26:58.438 }, 00:26:58.438 { 00:26:58.438 "name": null, 00:26:58.438 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:58.438 "is_configured": false, 00:26:58.438 "data_offset": 0, 00:26:58.438 "data_size": 63488 00:26:58.438 }, 00:26:58.438 { 00:26:58.438 "name": "BaseBdev3", 00:26:58.438 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:58.438 "is_configured": true, 00:26:58.438 "data_offset": 2048, 00:26:58.438 "data_size": 63488 00:26:58.438 } 00:26:58.438 ] 00:26:58.438 }' 00:26:58.438 15:56:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:58.439 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.697 [2024-11-05 15:56:30.949487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.697 15:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.697 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.697 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:58.697 "name": "Existed_Raid", 00:26:58.697 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:58.697 "strip_size_kb": 0, 00:26:58.697 "state": "configuring", 00:26:58.697 "raid_level": "raid1", 00:26:58.697 "superblock": true, 00:26:58.697 "num_base_bdevs": 3, 00:26:58.697 "num_base_bdevs_discovered": 1, 00:26:58.697 "num_base_bdevs_operational": 3, 00:26:58.697 "base_bdevs_list": [ 00:26:58.697 { 00:26:58.697 "name": null, 00:26:58.697 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:58.697 "is_configured": false, 00:26:58.697 "data_offset": 0, 00:26:58.697 "data_size": 63488 00:26:58.697 }, 00:26:58.697 { 00:26:58.697 "name": null, 00:26:58.697 "uuid": 
"d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:58.697 "is_configured": false, 00:26:58.697 "data_offset": 0, 00:26:58.697 "data_size": 63488 00:26:58.697 }, 00:26:58.697 { 00:26:58.697 "name": "BaseBdev3", 00:26:58.697 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:58.697 "is_configured": true, 00:26:58.697 "data_offset": 2048, 00:26:58.697 "data_size": 63488 00:26:58.697 } 00:26:58.697 ] 00:26:58.697 }' 00:26:58.697 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:58.697 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.956 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:58.956 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.956 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.956 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.214 [2024-11-05 15:56:31.386723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:59.214 "name": "Existed_Raid", 00:26:59.214 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:59.214 "strip_size_kb": 0, 00:26:59.214 "state": "configuring", 00:26:59.214 
"raid_level": "raid1", 00:26:59.214 "superblock": true, 00:26:59.214 "num_base_bdevs": 3, 00:26:59.214 "num_base_bdevs_discovered": 2, 00:26:59.214 "num_base_bdevs_operational": 3, 00:26:59.214 "base_bdevs_list": [ 00:26:59.214 { 00:26:59.214 "name": null, 00:26:59.214 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:59.214 "is_configured": false, 00:26:59.214 "data_offset": 0, 00:26:59.214 "data_size": 63488 00:26:59.214 }, 00:26:59.214 { 00:26:59.214 "name": "BaseBdev2", 00:26:59.214 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:59.214 "is_configured": true, 00:26:59.214 "data_offset": 2048, 00:26:59.214 "data_size": 63488 00:26:59.214 }, 00:26:59.214 { 00:26:59.214 "name": "BaseBdev3", 00:26:59.214 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:59.214 "is_configured": true, 00:26:59.214 "data_offset": 2048, 00:26:59.214 "data_size": 63488 00:26:59.214 } 00:26:59.214 ] 00:26:59.214 }' 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:59.214 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.472 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.472 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.473 15:56:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f2a9aed1-686a-437a-9015-1db069b19257 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 [2024-11-05 15:56:31.797003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:59.473 [2024-11-05 15:56:31.797153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:59.473 [2024-11-05 15:56:31.797163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:59.473 [2024-11-05 15:56:31.797361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:59.473 NewBaseBdev 00:26:59.473 [2024-11-05 15:56:31.797463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:59.473 [2024-11-05 15:56:31.797471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:59.473 [2024-11-05 15:56:31.797561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:59.473 
15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 [ 00:26:59.473 { 00:26:59.473 "name": "NewBaseBdev", 00:26:59.473 "aliases": [ 00:26:59.473 "f2a9aed1-686a-437a-9015-1db069b19257" 00:26:59.473 ], 00:26:59.473 "product_name": "Malloc disk", 00:26:59.473 "block_size": 512, 00:26:59.473 "num_blocks": 65536, 00:26:59.473 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:59.473 "assigned_rate_limits": { 00:26:59.473 "rw_ios_per_sec": 0, 00:26:59.473 "rw_mbytes_per_sec": 0, 00:26:59.473 "r_mbytes_per_sec": 0, 00:26:59.473 "w_mbytes_per_sec": 0 00:26:59.473 }, 00:26:59.473 "claimed": true, 00:26:59.473 "claim_type": "exclusive_write", 00:26:59.473 
"zoned": false, 00:26:59.473 "supported_io_types": { 00:26:59.473 "read": true, 00:26:59.473 "write": true, 00:26:59.473 "unmap": true, 00:26:59.473 "flush": true, 00:26:59.473 "reset": true, 00:26:59.473 "nvme_admin": false, 00:26:59.473 "nvme_io": false, 00:26:59.473 "nvme_io_md": false, 00:26:59.473 "write_zeroes": true, 00:26:59.473 "zcopy": true, 00:26:59.473 "get_zone_info": false, 00:26:59.473 "zone_management": false, 00:26:59.473 "zone_append": false, 00:26:59.473 "compare": false, 00:26:59.473 "compare_and_write": false, 00:26:59.473 "abort": true, 00:26:59.473 "seek_hole": false, 00:26:59.473 "seek_data": false, 00:26:59.473 "copy": true, 00:26:59.473 "nvme_iov_md": false 00:26:59.473 }, 00:26:59.473 "memory_domains": [ 00:26:59.473 { 00:26:59.473 "dma_device_id": "system", 00:26:59.473 "dma_device_type": 1 00:26:59.473 }, 00:26:59.473 { 00:26:59.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.473 "dma_device_type": 2 00:26:59.473 } 00:26:59.473 ], 00:26:59.473 "driver_specific": {} 00:26:59.473 } 00:26:59.473 ] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:59.473 "name": "Existed_Raid", 00:26:59.473 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:59.473 "strip_size_kb": 0, 00:26:59.473 "state": "online", 00:26:59.473 "raid_level": "raid1", 00:26:59.473 "superblock": true, 00:26:59.473 "num_base_bdevs": 3, 00:26:59.473 "num_base_bdevs_discovered": 3, 00:26:59.473 "num_base_bdevs_operational": 3, 00:26:59.473 "base_bdevs_list": [ 00:26:59.473 { 00:26:59.473 "name": "NewBaseBdev", 00:26:59.473 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:59.473 "is_configured": true, 00:26:59.473 "data_offset": 2048, 00:26:59.473 "data_size": 63488 00:26:59.473 }, 00:26:59.473 { 00:26:59.473 "name": "BaseBdev2", 00:26:59.473 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:59.473 "is_configured": true, 00:26:59.473 "data_offset": 2048, 00:26:59.473 "data_size": 63488 00:26:59.473 }, 00:26:59.473 
{ 00:26:59.473 "name": "BaseBdev3", 00:26:59.473 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:59.473 "is_configured": true, 00:26:59.473 "data_offset": 2048, 00:26:59.473 "data_size": 63488 00:26:59.473 } 00:26:59.473 ] 00:26:59.473 }' 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:59.473 15:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.732 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.732 [2024-11-05 15:56:32.133355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:59.990 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.990 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:59.990 "name": "Existed_Raid", 00:26:59.990 
"aliases": [ 00:26:59.990 "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204" 00:26:59.990 ], 00:26:59.990 "product_name": "Raid Volume", 00:26:59.990 "block_size": 512, 00:26:59.990 "num_blocks": 63488, 00:26:59.990 "uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:59.990 "assigned_rate_limits": { 00:26:59.990 "rw_ios_per_sec": 0, 00:26:59.990 "rw_mbytes_per_sec": 0, 00:26:59.990 "r_mbytes_per_sec": 0, 00:26:59.990 "w_mbytes_per_sec": 0 00:26:59.990 }, 00:26:59.990 "claimed": false, 00:26:59.990 "zoned": false, 00:26:59.990 "supported_io_types": { 00:26:59.990 "read": true, 00:26:59.990 "write": true, 00:26:59.990 "unmap": false, 00:26:59.990 "flush": false, 00:26:59.990 "reset": true, 00:26:59.990 "nvme_admin": false, 00:26:59.990 "nvme_io": false, 00:26:59.990 "nvme_io_md": false, 00:26:59.990 "write_zeroes": true, 00:26:59.990 "zcopy": false, 00:26:59.990 "get_zone_info": false, 00:26:59.990 "zone_management": false, 00:26:59.990 "zone_append": false, 00:26:59.990 "compare": false, 00:26:59.990 "compare_and_write": false, 00:26:59.990 "abort": false, 00:26:59.990 "seek_hole": false, 00:26:59.990 "seek_data": false, 00:26:59.990 "copy": false, 00:26:59.990 "nvme_iov_md": false 00:26:59.990 }, 00:26:59.990 "memory_domains": [ 00:26:59.990 { 00:26:59.990 "dma_device_id": "system", 00:26:59.990 "dma_device_type": 1 00:26:59.990 }, 00:26:59.990 { 00:26:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.990 "dma_device_type": 2 00:26:59.990 }, 00:26:59.990 { 00:26:59.990 "dma_device_id": "system", 00:26:59.990 "dma_device_type": 1 00:26:59.990 }, 00:26:59.990 { 00:26:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.990 "dma_device_type": 2 00:26:59.990 }, 00:26:59.990 { 00:26:59.990 "dma_device_id": "system", 00:26:59.990 "dma_device_type": 1 00:26:59.990 }, 00:26:59.990 { 00:26:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.990 "dma_device_type": 2 00:26:59.990 } 00:26:59.990 ], 00:26:59.991 "driver_specific": { 00:26:59.991 "raid": { 00:26:59.991 
"uuid": "4b9b69ef-5e8e-4e26-b56b-1d4b1b1fd204", 00:26:59.991 "strip_size_kb": 0, 00:26:59.991 "state": "online", 00:26:59.991 "raid_level": "raid1", 00:26:59.991 "superblock": true, 00:26:59.991 "num_base_bdevs": 3, 00:26:59.991 "num_base_bdevs_discovered": 3, 00:26:59.991 "num_base_bdevs_operational": 3, 00:26:59.991 "base_bdevs_list": [ 00:26:59.991 { 00:26:59.991 "name": "NewBaseBdev", 00:26:59.991 "uuid": "f2a9aed1-686a-437a-9015-1db069b19257", 00:26:59.991 "is_configured": true, 00:26:59.991 "data_offset": 2048, 00:26:59.991 "data_size": 63488 00:26:59.991 }, 00:26:59.991 { 00:26:59.991 "name": "BaseBdev2", 00:26:59.991 "uuid": "d92deebe-c6d5-45c5-b81f-67413c3e6e3d", 00:26:59.991 "is_configured": true, 00:26:59.991 "data_offset": 2048, 00:26:59.991 "data_size": 63488 00:26:59.991 }, 00:26:59.991 { 00:26:59.991 "name": "BaseBdev3", 00:26:59.991 "uuid": "4d86b39b-e37a-497e-8fd5-d29d76ab8d14", 00:26:59.991 "is_configured": true, 00:26:59.991 "data_offset": 2048, 00:26:59.991 "data_size": 63488 00:26:59.991 } 00:26:59.991 ] 00:26:59.991 } 00:26:59.991 } 00:26:59.991 }' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:59.991 BaseBdev2 00:26:59.991 BaseBdev3' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:59.991 
15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.991 [2024-11-05 15:56:32.333123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:59.991 [2024-11-05 15:56:32.333146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:59.991 [2024-11-05 15:56:32.333196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:59.991 [2024-11-05 15:56:32.333418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:59.991 [2024-11-05 15:56:32.333426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66192 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66192 ']' 00:26:59.991 15:56:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66192 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66192 00:26:59.991 killing process with pid 66192 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66192' 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66192 00:26:59.991 [2024-11-05 15:56:32.362806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:59.991 15:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66192 00:27:00.249 [2024-11-05 15:56:32.508064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:00.814 ************************************ 00:27:00.815 END TEST raid_state_function_test_sb 00:27:00.815 ************************************ 00:27:00.815 15:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:00.815 00:27:00.815 real 0m7.600s 00:27:00.815 user 0m12.412s 00:27:00.815 sys 0m1.154s 00:27:00.815 15:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:00.815 15:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.815 15:56:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:27:00.815 15:56:33 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:00.815 15:56:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:00.815 15:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:00.815 ************************************ 00:27:00.815 START TEST raid_superblock_test 00:27:00.815 ************************************ 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:00.815 15:56:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66784 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:00.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66784 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66784 ']' 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:00.815 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.815 [2024-11-05 15:56:33.161979] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:27:00.815 [2024-11-05 15:56:33.162229] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66784 ] 00:27:01.073 [2024-11-05 15:56:33.316815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.073 [2024-11-05 15:56:33.398426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.331 [2024-11-05 15:56:33.506120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:01.331 [2024-11-05 15:56:33.506247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:01.588 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:01.588 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:27:01.588 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:01.588 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:01.588 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:01.588 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:01.589 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:01.589 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:01.589 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:01.589 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:01.589 15:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:01.589 
15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.589 15:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.847 malloc1 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.847 [2024-11-05 15:56:34.021998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:01.847 [2024-11-05 15:56:34.022048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.847 [2024-11-05 15:56:34.022065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:01.847 [2024-11-05 15:56:34.022072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.847 [2024-11-05 15:56:34.023820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.847 [2024-11-05 15:56:34.023856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:01.847 pt1 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.847 malloc2 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.847 [2024-11-05 15:56:34.053144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:01.847 [2024-11-05 15:56:34.053259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.847 [2024-11-05 15:56:34.053279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:01.847 [2024-11-05 15:56:34.053286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.847 [2024-11-05 15:56:34.054972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.847 [2024-11-05 15:56:34.054999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:01.847 
pt2 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.847 malloc3 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.847 [2024-11-05 15:56:34.096086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:01.847 [2024-11-05 15:56:34.096207] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.847 [2024-11-05 15:56:34.096229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:01.847 [2024-11-05 15:56:34.096236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.847 [2024-11-05 15:56:34.097918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.847 [2024-11-05 15:56:34.097944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:01.847 pt3 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.847 [2024-11-05 15:56:34.104119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:01.847 [2024-11-05 15:56:34.105536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:01.847 [2024-11-05 15:56:34.105581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:01.847 [2024-11-05 15:56:34.105700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:01.847 [2024-11-05 15:56:34.105712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:01.847 [2024-11-05 15:56:34.105912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:01.847 
[2024-11-05 15:56:34.106027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:01.847 [2024-11-05 15:56:34.106036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:01.847 [2024-11-05 15:56:34.106140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.847 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:01.848 "name": "raid_bdev1", 00:27:01.848 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:01.848 "strip_size_kb": 0, 00:27:01.848 "state": "online", 00:27:01.848 "raid_level": "raid1", 00:27:01.848 "superblock": true, 00:27:01.848 "num_base_bdevs": 3, 00:27:01.848 "num_base_bdevs_discovered": 3, 00:27:01.848 "num_base_bdevs_operational": 3, 00:27:01.848 "base_bdevs_list": [ 00:27:01.848 { 00:27:01.848 "name": "pt1", 00:27:01.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:01.848 "is_configured": true, 00:27:01.848 "data_offset": 2048, 00:27:01.848 "data_size": 63488 00:27:01.848 }, 00:27:01.848 { 00:27:01.848 "name": "pt2", 00:27:01.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:01.848 "is_configured": true, 00:27:01.848 "data_offset": 2048, 00:27:01.848 "data_size": 63488 00:27:01.848 }, 00:27:01.848 { 00:27:01.848 "name": "pt3", 00:27:01.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:01.848 "is_configured": true, 00:27:01.848 "data_offset": 2048, 00:27:01.848 "data_size": 63488 00:27:01.848 } 00:27:01.848 ] 00:27:01.848 }' 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:01.848 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:02.106 15:56:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.106 [2024-11-05 15:56:34.424436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:02.106 "name": "raid_bdev1", 00:27:02.106 "aliases": [ 00:27:02.106 "f8c4af0d-14bb-40bb-9683-9614108caea6" 00:27:02.106 ], 00:27:02.106 "product_name": "Raid Volume", 00:27:02.106 "block_size": 512, 00:27:02.106 "num_blocks": 63488, 00:27:02.106 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:02.106 "assigned_rate_limits": { 00:27:02.106 "rw_ios_per_sec": 0, 00:27:02.106 "rw_mbytes_per_sec": 0, 00:27:02.106 "r_mbytes_per_sec": 0, 00:27:02.106 "w_mbytes_per_sec": 0 00:27:02.106 }, 00:27:02.106 "claimed": false, 00:27:02.106 "zoned": false, 00:27:02.106 "supported_io_types": { 00:27:02.106 "read": true, 00:27:02.106 "write": true, 00:27:02.106 "unmap": false, 00:27:02.106 "flush": false, 00:27:02.106 "reset": true, 00:27:02.106 "nvme_admin": false, 00:27:02.106 "nvme_io": false, 00:27:02.106 "nvme_io_md": false, 00:27:02.106 "write_zeroes": true, 00:27:02.106 "zcopy": false, 00:27:02.106 "get_zone_info": false, 00:27:02.106 "zone_management": false, 00:27:02.106 "zone_append": false, 00:27:02.106 "compare": false, 00:27:02.106 
"compare_and_write": false, 00:27:02.106 "abort": false, 00:27:02.106 "seek_hole": false, 00:27:02.106 "seek_data": false, 00:27:02.106 "copy": false, 00:27:02.106 "nvme_iov_md": false 00:27:02.106 }, 00:27:02.106 "memory_domains": [ 00:27:02.106 { 00:27:02.106 "dma_device_id": "system", 00:27:02.106 "dma_device_type": 1 00:27:02.106 }, 00:27:02.106 { 00:27:02.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.106 "dma_device_type": 2 00:27:02.106 }, 00:27:02.106 { 00:27:02.106 "dma_device_id": "system", 00:27:02.106 "dma_device_type": 1 00:27:02.106 }, 00:27:02.106 { 00:27:02.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.106 "dma_device_type": 2 00:27:02.106 }, 00:27:02.106 { 00:27:02.106 "dma_device_id": "system", 00:27:02.106 "dma_device_type": 1 00:27:02.106 }, 00:27:02.106 { 00:27:02.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.106 "dma_device_type": 2 00:27:02.106 } 00:27:02.106 ], 00:27:02.106 "driver_specific": { 00:27:02.106 "raid": { 00:27:02.106 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:02.106 "strip_size_kb": 0, 00:27:02.106 "state": "online", 00:27:02.106 "raid_level": "raid1", 00:27:02.106 "superblock": true, 00:27:02.106 "num_base_bdevs": 3, 00:27:02.106 "num_base_bdevs_discovered": 3, 00:27:02.106 "num_base_bdevs_operational": 3, 00:27:02.106 "base_bdevs_list": [ 00:27:02.106 { 00:27:02.106 "name": "pt1", 00:27:02.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:02.106 "is_configured": true, 00:27:02.106 "data_offset": 2048, 00:27:02.106 "data_size": 63488 00:27:02.106 }, 00:27:02.106 { 00:27:02.106 "name": "pt2", 00:27:02.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.106 "is_configured": true, 00:27:02.106 "data_offset": 2048, 00:27:02.106 "data_size": 63488 00:27:02.106 }, 00:27:02.106 { 00:27:02.106 "name": "pt3", 00:27:02.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.106 "is_configured": true, 00:27:02.106 "data_offset": 2048, 00:27:02.106 "data_size": 63488 00:27:02.106 } 
00:27:02.106 ] 00:27:02.106 } 00:27:02.106 } 00:27:02.106 }' 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:02.106 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:02.106 pt2 00:27:02.107 pt3' 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:02.107 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 15:56:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 [2024-11-05 15:56:34.600398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f8c4af0d-14bb-40bb-9683-9614108caea6 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f8c4af0d-14bb-40bb-9683-9614108caea6 ']' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 [2024-11-05 15:56:34.628173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.365 [2024-11-05 15:56:34.628191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.365 [2024-11-05 15:56:34.628242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.365 [2024-11-05 15:56:34.628304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.365 [2024-11-05 15:56:34.628312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:02.365 
15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:02.365 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.366 [2024-11-05 15:56:34.724244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:02.366 [2024-11-05 15:56:34.725755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:02.366 [2024-11-05 15:56:34.725796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:27:02.366 [2024-11-05 15:56:34.725833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:02.366 [2024-11-05 15:56:34.725886] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:02.366 [2024-11-05 15:56:34.725902] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:02.366 [2024-11-05 15:56:34.725917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.366 [2024-11-05 15:56:34.725924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:02.366 request: 00:27:02.366 { 00:27:02.366 "name": "raid_bdev1", 00:27:02.366 "raid_level": "raid1", 00:27:02.366 "base_bdevs": [ 00:27:02.366 "malloc1", 00:27:02.366 "malloc2", 00:27:02.366 "malloc3" 00:27:02.366 ], 00:27:02.366 "superblock": false, 00:27:02.366 "method": "bdev_raid_create", 00:27:02.366 "req_id": 1 00:27:02.366 } 00:27:02.366 Got JSON-RPC error response 00:27:02.366 response: 00:27:02.366 { 00:27:02.366 "code": -17, 00:27:02.366 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:02.366 } 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.366 15:56:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.366 [2024-11-05 15:56:34.768207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:02.366 [2024-11-05 15:56:34.768244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.366 [2024-11-05 15:56:34.768259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:02.366 [2024-11-05 15:56:34.768266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.366 [2024-11-05 15:56:34.770026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.366 [2024-11-05 15:56:34.770051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:02.366 [2024-11-05 15:56:34.770107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:02.366 [2024-11-05 15:56:34.770142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:02.366 pt1 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.366 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.624 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.624 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.624 "name": "raid_bdev1", 00:27:02.624 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:02.624 "strip_size_kb": 0, 00:27:02.624 "state": "configuring", 00:27:02.624 
"raid_level": "raid1", 00:27:02.624 "superblock": true, 00:27:02.624 "num_base_bdevs": 3, 00:27:02.624 "num_base_bdevs_discovered": 1, 00:27:02.624 "num_base_bdevs_operational": 3, 00:27:02.624 "base_bdevs_list": [ 00:27:02.624 { 00:27:02.624 "name": "pt1", 00:27:02.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:02.624 "is_configured": true, 00:27:02.624 "data_offset": 2048, 00:27:02.624 "data_size": 63488 00:27:02.624 }, 00:27:02.624 { 00:27:02.624 "name": null, 00:27:02.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.624 "is_configured": false, 00:27:02.624 "data_offset": 2048, 00:27:02.624 "data_size": 63488 00:27:02.624 }, 00:27:02.624 { 00:27:02.624 "name": null, 00:27:02.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.624 "is_configured": false, 00:27:02.624 "data_offset": 2048, 00:27:02.624 "data_size": 63488 00:27:02.624 } 00:27:02.624 ] 00:27:02.624 }' 00:27:02.624 15:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.624 15:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.883 [2024-11-05 15:56:35.092291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:02.883 [2024-11-05 15:56:35.092339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.883 [2024-11-05 15:56:35.092355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:02.883 [2024-11-05 15:56:35.092362] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.883 [2024-11-05 15:56:35.092696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.883 [2024-11-05 15:56:35.092706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:02.883 [2024-11-05 15:56:35.092767] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:02.883 [2024-11-05 15:56:35.092783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:02.883 pt2 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.883 [2024-11-05 15:56:35.100286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.883 "name": "raid_bdev1", 00:27:02.883 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:02.883 "strip_size_kb": 0, 00:27:02.883 "state": "configuring", 00:27:02.883 "raid_level": "raid1", 00:27:02.883 "superblock": true, 00:27:02.883 "num_base_bdevs": 3, 00:27:02.883 "num_base_bdevs_discovered": 1, 00:27:02.883 "num_base_bdevs_operational": 3, 00:27:02.883 "base_bdevs_list": [ 00:27:02.883 { 00:27:02.883 "name": "pt1", 00:27:02.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:02.883 "is_configured": true, 00:27:02.883 "data_offset": 2048, 00:27:02.883 "data_size": 63488 00:27:02.883 }, 00:27:02.883 { 00:27:02.883 "name": null, 00:27:02.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.883 "is_configured": false, 00:27:02.883 "data_offset": 0, 00:27:02.883 "data_size": 63488 00:27:02.883 }, 00:27:02.883 { 00:27:02.883 "name": null, 00:27:02.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.883 "is_configured": false, 00:27:02.883 "data_offset": 2048, 00:27:02.883 
"data_size": 63488 00:27:02.883 } 00:27:02.883 ] 00:27:02.883 }' 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.883 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.142 [2024-11-05 15:56:35.416342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:03.142 [2024-11-05 15:56:35.416393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:03.142 [2024-11-05 15:56:35.416407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:03.142 [2024-11-05 15:56:35.416416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:03.142 [2024-11-05 15:56:35.416757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:03.142 [2024-11-05 15:56:35.416770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:03.142 [2024-11-05 15:56:35.416824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:03.142 [2024-11-05 15:56:35.416861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:03.142 pt2 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.142 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.142 [2024-11-05 15:56:35.424327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:03.142 [2024-11-05 15:56:35.424450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:03.142 [2024-11-05 15:56:35.424468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:03.142 [2024-11-05 15:56:35.424476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:03.142 [2024-11-05 15:56:35.424756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:03.142 [2024-11-05 15:56:35.424776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:03.142 [2024-11-05 15:56:35.424820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:03.142 [2024-11-05 15:56:35.424835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:03.142 [2024-11-05 15:56:35.424939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:03.143 [2024-11-05 15:56:35.424949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:03.143 [2024-11-05 15:56:35.425136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:03.143 [2024-11-05 15:56:35.425243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:27:03.143 [2024-11-05 15:56:35.425250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:03.143 [2024-11-05 15:56:35.425351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:03.143 pt3 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:03.143 "name": "raid_bdev1", 00:27:03.143 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:03.143 "strip_size_kb": 0, 00:27:03.143 "state": "online", 00:27:03.143 "raid_level": "raid1", 00:27:03.143 "superblock": true, 00:27:03.143 "num_base_bdevs": 3, 00:27:03.143 "num_base_bdevs_discovered": 3, 00:27:03.143 "num_base_bdevs_operational": 3, 00:27:03.143 "base_bdevs_list": [ 00:27:03.143 { 00:27:03.143 "name": "pt1", 00:27:03.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:03.143 "is_configured": true, 00:27:03.143 "data_offset": 2048, 00:27:03.143 "data_size": 63488 00:27:03.143 }, 00:27:03.143 { 00:27:03.143 "name": "pt2", 00:27:03.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.143 "is_configured": true, 00:27:03.143 "data_offset": 2048, 00:27:03.143 "data_size": 63488 00:27:03.143 }, 00:27:03.143 { 00:27:03.143 "name": "pt3", 00:27:03.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:03.143 "is_configured": true, 00:27:03.143 "data_offset": 2048, 00:27:03.143 "data_size": 63488 00:27:03.143 } 00:27:03.143 ] 00:27:03.143 }' 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:03.143 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
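The `verify_raid_bdev_properties` trace above repeatedly runs a jq filter at `bdev_raid.sh@188` to pick out the configured base bdevs from the `driver_specific.raid` section of the `bdev_get_bdevs` JSON. As a standalone illustration, here is a Python rendering of that filter's logic; the input dict is a trimmed, hypothetical stand-in for the real RPC output, not the full payload logged above:

```python
# Python rendering of the jq filter at bdev_raid.sh@188:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
# The JSON below is a trimmed stand-in for real `bdev_get_bdevs -b raid_bdev1`
# output (only the fields the filter touches are kept).
import json

raid_info = json.loads("""
{"driver_specific": {"raid": {"base_bdevs_list": [
  {"name": "pt1", "is_configured": true},
  {"name": "pt2", "is_configured": false},
  {"name": "pt3", "is_configured": true}
]}}}
""")

base_bdev_names = [
    bdev["name"]
    for bdev in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
    if bdev["is_configured"]
]
print(base_bdev_names)
```

In the log above all three pt bdevs are configured, so `base_bdev_names` becomes `pt1 pt2 pt3`, which the script then iterates to compare each base bdev's `block_size`/`md_size` against the raid bdev's.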
00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.401 [2024-11-05 15:56:35.732664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.401 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:03.401 "name": "raid_bdev1", 00:27:03.401 "aliases": [ 00:27:03.401 "f8c4af0d-14bb-40bb-9683-9614108caea6" 00:27:03.401 ], 00:27:03.401 "product_name": "Raid Volume", 00:27:03.401 "block_size": 512, 00:27:03.401 "num_blocks": 63488, 00:27:03.401 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:03.401 "assigned_rate_limits": { 00:27:03.401 "rw_ios_per_sec": 0, 00:27:03.401 "rw_mbytes_per_sec": 0, 00:27:03.401 "r_mbytes_per_sec": 0, 00:27:03.401 "w_mbytes_per_sec": 0 00:27:03.401 }, 00:27:03.401 "claimed": false, 00:27:03.401 "zoned": false, 00:27:03.401 "supported_io_types": { 00:27:03.401 "read": true, 00:27:03.401 "write": true, 00:27:03.401 "unmap": false, 00:27:03.401 "flush": false, 00:27:03.401 "reset": true, 00:27:03.401 "nvme_admin": false, 00:27:03.401 "nvme_io": false, 00:27:03.401 "nvme_io_md": false, 00:27:03.401 "write_zeroes": true, 00:27:03.401 "zcopy": false, 00:27:03.401 "get_zone_info": false, 
00:27:03.401 "zone_management": false, 00:27:03.401 "zone_append": false, 00:27:03.401 "compare": false, 00:27:03.401 "compare_and_write": false, 00:27:03.401 "abort": false, 00:27:03.401 "seek_hole": false, 00:27:03.401 "seek_data": false, 00:27:03.401 "copy": false, 00:27:03.401 "nvme_iov_md": false 00:27:03.401 }, 00:27:03.401 "memory_domains": [ 00:27:03.401 { 00:27:03.401 "dma_device_id": "system", 00:27:03.401 "dma_device_type": 1 00:27:03.401 }, 00:27:03.401 { 00:27:03.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.401 "dma_device_type": 2 00:27:03.401 }, 00:27:03.401 { 00:27:03.401 "dma_device_id": "system", 00:27:03.401 "dma_device_type": 1 00:27:03.401 }, 00:27:03.401 { 00:27:03.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.401 "dma_device_type": 2 00:27:03.401 }, 00:27:03.401 { 00:27:03.401 "dma_device_id": "system", 00:27:03.402 "dma_device_type": 1 00:27:03.402 }, 00:27:03.402 { 00:27:03.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.402 "dma_device_type": 2 00:27:03.402 } 00:27:03.402 ], 00:27:03.402 "driver_specific": { 00:27:03.402 "raid": { 00:27:03.402 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:03.402 "strip_size_kb": 0, 00:27:03.402 "state": "online", 00:27:03.402 "raid_level": "raid1", 00:27:03.402 "superblock": true, 00:27:03.402 "num_base_bdevs": 3, 00:27:03.402 "num_base_bdevs_discovered": 3, 00:27:03.402 "num_base_bdevs_operational": 3, 00:27:03.402 "base_bdevs_list": [ 00:27:03.402 { 00:27:03.402 "name": "pt1", 00:27:03.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:03.402 "is_configured": true, 00:27:03.402 "data_offset": 2048, 00:27:03.402 "data_size": 63488 00:27:03.402 }, 00:27:03.402 { 00:27:03.402 "name": "pt2", 00:27:03.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.402 "is_configured": true, 00:27:03.402 "data_offset": 2048, 00:27:03.402 "data_size": 63488 00:27:03.402 }, 00:27:03.402 { 00:27:03.402 "name": "pt3", 00:27:03.402 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:27:03.402 "is_configured": true, 00:27:03.402 "data_offset": 2048, 00:27:03.402 "data_size": 63488 00:27:03.402 } 00:27:03.402 ] 00:27:03.402 } 00:27:03.402 } 00:27:03.402 }' 00:27:03.402 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:03.402 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:03.402 pt2 00:27:03.402 pt3' 00:27:03.402 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:03.660 15:56:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.660 [2024-11-05 15:56:35.924655] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f8c4af0d-14bb-40bb-9683-9614108caea6 '!=' f8c4af0d-14bb-40bb-9683-9614108caea6 ']' 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:03.660 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.661 [2024-11-05 15:56:35.952460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:03.661 15:56:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:03.661 "name": "raid_bdev1", 00:27:03.661 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:03.661 "strip_size_kb": 0, 00:27:03.661 "state": "online", 00:27:03.661 "raid_level": "raid1", 00:27:03.661 "superblock": true, 00:27:03.661 "num_base_bdevs": 3, 00:27:03.661 "num_base_bdevs_discovered": 2, 00:27:03.661 "num_base_bdevs_operational": 2, 00:27:03.661 "base_bdevs_list": [ 00:27:03.661 { 00:27:03.661 "name": null, 00:27:03.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.661 "is_configured": false, 00:27:03.661 "data_offset": 0, 00:27:03.661 "data_size": 63488 00:27:03.661 }, 00:27:03.661 { 00:27:03.661 "name": "pt2", 00:27:03.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.661 "is_configured": true, 00:27:03.661 "data_offset": 2048, 00:27:03.661 "data_size": 63488 00:27:03.661 }, 00:27:03.661 { 00:27:03.661 "name": "pt3", 00:27:03.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:03.661 "is_configured": true, 00:27:03.661 "data_offset": 2048, 00:27:03.661 "data_size": 63488 00:27:03.661 } 
00:27:03.661 ] 00:27:03.661 }' 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:03.661 15:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.919 [2024-11-05 15:56:36.252494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:03.919 [2024-11-05 15:56:36.252518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:03.919 [2024-11-05 15:56:36.252572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:03.919 [2024-11-05 15:56:36.252619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:03.919 [2024-11-05 15:56:36.252630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.919 15:56:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.919 [2024-11-05 15:56:36.308479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:03.919 [2024-11-05 15:56:36.308521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:03.919 [2024-11-05 15:56:36.308533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:03.919 [2024-11-05 15:56:36.308542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:03.919 [2024-11-05 15:56:36.310350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:03.919 [2024-11-05 15:56:36.310381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:03.919 [2024-11-05 15:56:36.310435] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:03.919 [2024-11-05 15:56:36.310469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:03.919 pt2 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:03.919 15:56:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.919 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.177 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.177 "name": "raid_bdev1", 00:27:04.177 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:04.177 "strip_size_kb": 0, 00:27:04.177 "state": "configuring", 00:27:04.177 "raid_level": "raid1", 00:27:04.177 "superblock": true, 00:27:04.177 "num_base_bdevs": 3, 00:27:04.177 "num_base_bdevs_discovered": 1, 00:27:04.177 "num_base_bdevs_operational": 2, 00:27:04.177 "base_bdevs_list": [ 00:27:04.177 { 00:27:04.177 "name": null, 00:27:04.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.177 "is_configured": false, 00:27:04.177 "data_offset": 2048, 00:27:04.177 "data_size": 63488 00:27:04.177 }, 00:27:04.177 { 00:27:04.177 "name": "pt2", 00:27:04.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.177 "is_configured": true, 00:27:04.177 "data_offset": 2048, 00:27:04.177 "data_size": 63488 00:27:04.177 }, 00:27:04.177 { 00:27:04.177 "name": null, 00:27:04.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.177 "is_configured": false, 00:27:04.177 "data_offset": 2048, 00:27:04.177 "data_size": 63488 00:27:04.177 } 
00:27:04.177 ] 00:27:04.177 }' 00:27:04.177 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.177 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.436 [2024-11-05 15:56:36.608568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:04.436 [2024-11-05 15:56:36.608622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.436 [2024-11-05 15:56:36.608637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:04.436 [2024-11-05 15:56:36.608647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.436 [2024-11-05 15:56:36.608995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.436 [2024-11-05 15:56:36.609014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:04.436 [2024-11-05 15:56:36.609077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:04.436 [2024-11-05 15:56:36.609096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:04.436 [2024-11-05 15:56:36.609179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:27:04.436 [2024-11-05 15:56:36.609197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:04.436 [2024-11-05 15:56:36.609386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:04.436 [2024-11-05 15:56:36.609490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:04.436 [2024-11-05 15:56:36.609503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:04.436 [2024-11-05 15:56:36.609604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.436 pt3 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:04.436 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.437 "name": "raid_bdev1", 00:27:04.437 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:04.437 "strip_size_kb": 0, 00:27:04.437 "state": "online", 00:27:04.437 "raid_level": "raid1", 00:27:04.437 "superblock": true, 00:27:04.437 "num_base_bdevs": 3, 00:27:04.437 "num_base_bdevs_discovered": 2, 00:27:04.437 "num_base_bdevs_operational": 2, 00:27:04.437 "base_bdevs_list": [ 00:27:04.437 { 00:27:04.437 "name": null, 00:27:04.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.437 "is_configured": false, 00:27:04.437 "data_offset": 2048, 00:27:04.437 "data_size": 63488 00:27:04.437 }, 00:27:04.437 { 00:27:04.437 "name": "pt2", 00:27:04.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.437 "is_configured": true, 00:27:04.437 "data_offset": 2048, 00:27:04.437 "data_size": 63488 00:27:04.437 }, 00:27:04.437 { 00:27:04.437 "name": "pt3", 00:27:04.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.437 "is_configured": true, 00:27:04.437 "data_offset": 2048, 00:27:04.437 "data_size": 63488 00:27:04.437 } 00:27:04.437 ] 00:27:04.437 }' 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.437 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.696 [2024-11-05 15:56:36.928604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:04.696 [2024-11-05 15:56:36.928632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:04.696 [2024-11-05 15:56:36.928686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.696 [2024-11-05 15:56:36.928735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:04.696 [2024-11-05 15:56:36.928742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.696 [2024-11-05 15:56:36.980616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:04.696 [2024-11-05 15:56:36.980659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.696 [2024-11-05 15:56:36.980674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:04.696 [2024-11-05 15:56:36.980681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.696 [2024-11-05 15:56:36.982460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.696 [2024-11-05 15:56:36.982492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:04.696 [2024-11-05 15:56:36.982549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:04.696 [2024-11-05 15:56:36.982579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:04.696 [2024-11-05 15:56:36.982670] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:04.696 [2024-11-05 15:56:36.982684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:04.696 [2024-11-05 15:56:36.982697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:27:04.696 [2024-11-05 15:56:36.982736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:04.696 pt1 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.696 15:56:36 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.696 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.696 "name": "raid_bdev1", 00:27:04.696 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:04.696 "strip_size_kb": 0, 00:27:04.696 "state": "configuring", 00:27:04.696 "raid_level": "raid1", 00:27:04.696 "superblock": true, 00:27:04.696 "num_base_bdevs": 3, 00:27:04.696 "num_base_bdevs_discovered": 1, 00:27:04.696 "num_base_bdevs_operational": 2, 00:27:04.696 "base_bdevs_list": [ 00:27:04.696 { 00:27:04.696 "name": null, 00:27:04.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.696 "is_configured": false, 00:27:04.696 "data_offset": 2048, 00:27:04.696 "data_size": 63488 00:27:04.696 }, 00:27:04.696 { 00:27:04.696 "name": "pt2", 00:27:04.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.696 "is_configured": true, 00:27:04.696 "data_offset": 2048, 00:27:04.696 "data_size": 63488 00:27:04.696 }, 00:27:04.696 { 00:27:04.696 "name": null, 00:27:04.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.696 "is_configured": false, 00:27:04.696 "data_offset": 2048, 00:27:04.696 "data_size": 63488 00:27:04.696 } 00:27:04.696 ] 00:27:04.696 }' 00:27:04.696 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.696 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 [2024-11-05 15:56:37.320685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:04.955 [2024-11-05 15:56:37.320732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.955 [2024-11-05 15:56:37.320747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:04.955 [2024-11-05 15:56:37.320754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.955 [2024-11-05 15:56:37.321096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.955 [2024-11-05 15:56:37.321113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:04.955 [2024-11-05 15:56:37.321171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:04.955 [2024-11-05 15:56:37.321200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:04.955 [2024-11-05 15:56:37.321286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:04.955 [2024-11-05 15:56:37.321297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:04.955 [2024-11-05 15:56:37.321490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:04.955 [2024-11-05 15:56:37.321607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:04.955 [2024-11-05 15:56:37.321618] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:04.955 [2024-11-05 15:56:37.321718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.955 pt3 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.955 "name": "raid_bdev1", 00:27:04.955 "uuid": "f8c4af0d-14bb-40bb-9683-9614108caea6", 00:27:04.955 "strip_size_kb": 0, 00:27:04.955 "state": "online", 00:27:04.955 "raid_level": "raid1", 00:27:04.955 "superblock": true, 00:27:04.955 "num_base_bdevs": 3, 00:27:04.955 "num_base_bdevs_discovered": 2, 00:27:04.955 "num_base_bdevs_operational": 2, 00:27:04.955 "base_bdevs_list": [ 00:27:04.955 { 00:27:04.955 "name": null, 00:27:04.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.955 "is_configured": false, 00:27:04.955 "data_offset": 2048, 00:27:04.955 "data_size": 63488 00:27:04.955 }, 00:27:04.955 { 00:27:04.955 "name": "pt2", 00:27:04.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.955 "is_configured": true, 00:27:04.955 "data_offset": 2048, 00:27:04.955 "data_size": 63488 00:27:04.955 }, 00:27:04.955 { 00:27:04.955 "name": "pt3", 00:27:04.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.955 "is_configured": true, 00:27:04.955 "data_offset": 2048, 00:27:04.955 "data_size": 63488 00:27:04.955 } 00:27:04.955 ] 00:27:04.955 }' 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.955 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.214 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:05.214 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.214 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.214 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:05.472 [2024-11-05 15:56:37.664982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f8c4af0d-14bb-40bb-9683-9614108caea6 '!=' f8c4af0d-14bb-40bb-9683-9614108caea6 ']' 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66784 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66784 ']' 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66784 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66784 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:05.472 killing process with pid 66784 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66784' 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 66784 00:27:05.472 [2024-11-05 15:56:37.722308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:05.472 [2024-11-05 15:56:37.722382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:05.472 15:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 66784 00:27:05.472 [2024-11-05 15:56:37.722429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:05.472 [2024-11-05 15:56:37.722438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:05.472 [2024-11-05 15:56:37.867038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:06.038 15:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:06.038 00:27:06.038 real 0m5.328s 00:27:06.038 user 0m8.474s 00:27:06.038 sys 0m0.865s 00:27:06.038 15:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:06.038 15:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.038 ************************************ 00:27:06.038 END TEST raid_superblock_test 00:27:06.038 ************************************ 00:27:06.297 15:56:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:27:06.297 15:56:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:06.297 15:56:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:06.297 15:56:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:06.297 ************************************ 00:27:06.297 START TEST raid_read_error_test 00:27:06.297 ************************************ 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:27:06.297 15:56:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:06.297 15:56:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tHwqWDYHaH 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67202 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67202 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67202 ']' 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:06.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:06.297 15:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.297 [2024-11-05 15:56:38.543945] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:27:06.297 [2024-11-05 15:56:38.544065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67202 ] 00:27:06.298 [2024-11-05 15:56:38.698722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.556 [2024-11-05 15:56:38.782220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.556 [2024-11-05 15:56:38.893447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.556 [2024-11-05 15:56:38.893483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:07.121 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:07.121 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:27:07.121 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 BaseBdev1_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 true 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 [2024-11-05 15:56:39.418938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:07.122 [2024-11-05 15:56:39.418984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.122 [2024-11-05 15:56:39.419000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:07.122 [2024-11-05 15:56:39.419009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.122 [2024-11-05 15:56:39.420710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.122 [2024-11-05 15:56:39.420742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:07.122 BaseBdev1 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 BaseBdev2_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 true 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 [2024-11-05 15:56:39.457821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:07.122 [2024-11-05 15:56:39.457868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.122 [2024-11-05 15:56:39.457880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:07.122 [2024-11-05 15:56:39.457887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.122 [2024-11-05 15:56:39.459568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.122 [2024-11-05 15:56:39.459599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:07.122 BaseBdev2 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 BaseBdev3_malloc 00:27:07.122 15:56:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 true 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 [2024-11-05 15:56:39.514999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:07.122 [2024-11-05 15:56:39.515040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.122 [2024-11-05 15:56:39.515054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:07.122 [2024-11-05 15:56:39.515063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.122 [2024-11-05 15:56:39.516756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.122 [2024-11-05 15:56:39.516788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:07.122 BaseBdev3 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.122 [2024-11-05 15:56:39.523049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:07.122 [2024-11-05 15:56:39.524508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:07.122 [2024-11-05 15:56:39.524572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:07.122 [2024-11-05 15:56:39.524732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:07.122 [2024-11-05 15:56:39.524746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:07.122 [2024-11-05 15:56:39.524957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:27:07.122 [2024-11-05 15:56:39.525083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:07.122 [2024-11-05 15:56:39.525097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:07.122 [2024-11-05 15:56:39.525205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:07.122 15:56:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.122 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.380 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.380 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.380 "name": "raid_bdev1", 00:27:07.380 "uuid": "bb7844aa-d9c3-4243-a72b-3555fc48ac32", 00:27:07.380 "strip_size_kb": 0, 00:27:07.380 "state": "online", 00:27:07.380 "raid_level": "raid1", 00:27:07.380 "superblock": true, 00:27:07.380 "num_base_bdevs": 3, 00:27:07.380 "num_base_bdevs_discovered": 3, 00:27:07.380 "num_base_bdevs_operational": 3, 00:27:07.380 "base_bdevs_list": [ 00:27:07.380 { 00:27:07.380 "name": "BaseBdev1", 00:27:07.380 "uuid": "ebf4dade-9c6d-564b-a010-f5ccf762d607", 00:27:07.380 "is_configured": true, 00:27:07.380 "data_offset": 2048, 00:27:07.380 "data_size": 63488 00:27:07.380 }, 00:27:07.380 { 00:27:07.380 "name": "BaseBdev2", 00:27:07.380 "uuid": "94d5ee8a-240e-5b62-b9e7-ccd3afaf762a", 00:27:07.380 "is_configured": true, 00:27:07.380 "data_offset": 2048, 00:27:07.380 "data_size": 63488 
00:27:07.380 }, 00:27:07.380 { 00:27:07.380 "name": "BaseBdev3", 00:27:07.380 "uuid": "15432fdb-8c4c-5784-a224-fd7cbf864834", 00:27:07.380 "is_configured": true, 00:27:07.380 "data_offset": 2048, 00:27:07.380 "data_size": 63488 00:27:07.380 } 00:27:07.380 ] 00:27:07.380 }' 00:27:07.380 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.380 15:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.638 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:07.638 15:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:07.638 [2024-11-05 15:56:39.927906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:08.570 
15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.570 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:08.570 "name": "raid_bdev1", 00:27:08.570 "uuid": "bb7844aa-d9c3-4243-a72b-3555fc48ac32", 00:27:08.570 "strip_size_kb": 0, 00:27:08.570 "state": "online", 00:27:08.570 "raid_level": "raid1", 00:27:08.570 "superblock": true, 00:27:08.570 "num_base_bdevs": 3, 00:27:08.570 "num_base_bdevs_discovered": 3, 00:27:08.570 "num_base_bdevs_operational": 3, 00:27:08.570 "base_bdevs_list": [ 00:27:08.570 { 00:27:08.570 "name": "BaseBdev1", 00:27:08.570 "uuid": "ebf4dade-9c6d-564b-a010-f5ccf762d607", 
00:27:08.570 "is_configured": true, 00:27:08.570 "data_offset": 2048, 00:27:08.570 "data_size": 63488 00:27:08.570 }, 00:27:08.570 { 00:27:08.570 "name": "BaseBdev2", 00:27:08.570 "uuid": "94d5ee8a-240e-5b62-b9e7-ccd3afaf762a", 00:27:08.570 "is_configured": true, 00:27:08.570 "data_offset": 2048, 00:27:08.570 "data_size": 63488 00:27:08.570 }, 00:27:08.570 { 00:27:08.570 "name": "BaseBdev3", 00:27:08.571 "uuid": "15432fdb-8c4c-5784-a224-fd7cbf864834", 00:27:08.571 "is_configured": true, 00:27:08.571 "data_offset": 2048, 00:27:08.571 "data_size": 63488 00:27:08.571 } 00:27:08.571 ] 00:27:08.571 }' 00:27:08.571 15:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:08.571 15:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.829 [2024-11-05 15:56:41.172785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:08.829 [2024-11-05 15:56:41.172818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:08.829 [2024-11-05 15:56:41.175164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.829 [2024-11-05 15:56:41.175206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.829 [2024-11-05 15:56:41.175294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:08.829 [2024-11-05 15:56:41.175302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:08.829 { 00:27:08.829 "results": [ 00:27:08.829 { 00:27:08.829 "job": "raid_bdev1", 
00:27:08.829 "core_mask": "0x1", 00:27:08.829 "workload": "randrw", 00:27:08.829 "percentage": 50, 00:27:08.829 "status": "finished", 00:27:08.829 "queue_depth": 1, 00:27:08.829 "io_size": 131072, 00:27:08.829 "runtime": 1.243383, 00:27:08.829 "iops": 17599.565057588854, 00:27:08.829 "mibps": 2199.9456321986067, 00:27:08.829 "io_failed": 0, 00:27:08.829 "io_timeout": 0, 00:27:08.829 "avg_latency_us": 54.35566238632729, 00:27:08.829 "min_latency_us": 22.54769230769231, 00:27:08.829 "max_latency_us": 1373.7353846153846 00:27:08.829 } 00:27:08.829 ], 00:27:08.829 "core_count": 1 00:27:08.829 } 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67202 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67202 ']' 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67202 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67202 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:08.829 killing process with pid 67202 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67202' 00:27:08.829 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67202 00:27:08.829 [2024-11-05 15:56:41.206067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:08.829 15:56:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67202 00:27:09.087 [2024-11-05 15:56:41.319223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tHwqWDYHaH 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:09.652 00:27:09.652 real 0m3.445s 00:27:09.652 user 0m4.132s 00:27:09.652 sys 0m0.373s 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:09.652 ************************************ 00:27:09.652 END TEST raid_read_error_test 00:27:09.652 ************************************ 00:27:09.652 15:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.652 15:56:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:27:09.652 15:56:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:09.652 15:56:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:09.652 15:56:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:09.652 ************************************ 00:27:09.652 START TEST raid_write_error_test 00:27:09.652 ************************************ 00:27:09.652 15:56:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NvqJFqsl7U 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67337 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67337 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67337 ']' 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:09.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:09.652 15:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.652 [2024-11-05 15:56:42.026521] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:27:09.652 [2024-11-05 15:56:42.026633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67337 ] 00:27:09.910 [2024-11-05 15:56:42.172184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.910 [2024-11-05 15:56:42.255860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.167 [2024-11-05 15:56:42.365392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:10.167 [2024-11-05 15:56:42.365424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 BaseBdev1_malloc 00:27:10.732 15:56:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 true 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 [2024-11-05 15:56:42.907203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:10.732 [2024-11-05 15:56:42.907252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.732 [2024-11-05 15:56:42.907269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:10.732 [2024-11-05 15:56:42.907278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.732 [2024-11-05 15:56:42.909042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.732 [2024-11-05 15:56:42.909075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:10.732 BaseBdev1 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 BaseBdev2_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 true 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 [2024-11-05 15:56:42.946757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:10.732 [2024-11-05 15:56:42.946804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.732 [2024-11-05 15:56:42.946818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:10.732 [2024-11-05 15:56:42.946827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.732 [2024-11-05 15:56:42.948570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.732 [2024-11-05 15:56:42.948607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:10.732 BaseBdev2 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 BaseBdev3_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 true 00:27:10.732 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.732 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:10.732 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.732 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.732 [2024-11-05 15:56:43.006400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:10.732 [2024-11-05 15:56:43.006451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.732 [2024-11-05 15:56:43.006468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:10.732 [2024-11-05 15:56:43.006477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.732 [2024-11-05 15:56:43.008260] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.732 [2024-11-05 15:56:43.008390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:10.732 BaseBdev3 00:27:10.732 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.733 [2024-11-05 15:56:43.014457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:10.733 [2024-11-05 15:56:43.016027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:10.733 [2024-11-05 15:56:43.016090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:10.733 [2024-11-05 15:56:43.016262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:10.733 [2024-11-05 15:56:43.016271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:10.733 [2024-11-05 15:56:43.016491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:27:10.733 [2024-11-05 15:56:43.016620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:10.733 [2024-11-05 15:56:43.016629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:10.733 [2024-11-05 15:56:43.016757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.733 15:56:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.733 "name": "raid_bdev1", 00:27:10.733 "uuid": "ce454776-b008-4d71-9d3b-d4172f3f93fe", 00:27:10.733 "strip_size_kb": 0, 00:27:10.733 "state": "online", 00:27:10.733 "raid_level": "raid1", 00:27:10.733 "superblock": true, 00:27:10.733 
"num_base_bdevs": 3, 00:27:10.733 "num_base_bdevs_discovered": 3, 00:27:10.733 "num_base_bdevs_operational": 3, 00:27:10.733 "base_bdevs_list": [ 00:27:10.733 { 00:27:10.733 "name": "BaseBdev1", 00:27:10.733 "uuid": "5da5a8dd-ad59-5890-87ad-c1a3c29d074d", 00:27:10.733 "is_configured": true, 00:27:10.733 "data_offset": 2048, 00:27:10.733 "data_size": 63488 00:27:10.733 }, 00:27:10.733 { 00:27:10.733 "name": "BaseBdev2", 00:27:10.733 "uuid": "6f295660-666a-5816-8558-8ab4b617a897", 00:27:10.733 "is_configured": true, 00:27:10.733 "data_offset": 2048, 00:27:10.733 "data_size": 63488 00:27:10.733 }, 00:27:10.733 { 00:27:10.733 "name": "BaseBdev3", 00:27:10.733 "uuid": "34fcec04-df8b-535e-a2c9-ce55082e410a", 00:27:10.733 "is_configured": true, 00:27:10.733 "data_offset": 2048, 00:27:10.733 "data_size": 63488 00:27:10.733 } 00:27:10.733 ] 00:27:10.733 }' 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.733 15:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.991 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:10.991 15:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:11.249 [2024-11-05 15:56:43.431303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.180 [2024-11-05 15:56:44.346197] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:12.180 [2024-11-05 15:56:44.346360] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:12.180 [2024-11-05 15:56:44.346566] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.180 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.180 "name": "raid_bdev1", 00:27:12.180 "uuid": "ce454776-b008-4d71-9d3b-d4172f3f93fe", 00:27:12.180 "strip_size_kb": 0, 00:27:12.180 "state": "online", 00:27:12.180 "raid_level": "raid1", 00:27:12.180 "superblock": true, 00:27:12.180 "num_base_bdevs": 3, 00:27:12.180 "num_base_bdevs_discovered": 2, 00:27:12.180 "num_base_bdevs_operational": 2, 00:27:12.180 "base_bdevs_list": [ 00:27:12.180 { 00:27:12.180 "name": null, 00:27:12.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.180 "is_configured": false, 00:27:12.180 "data_offset": 0, 00:27:12.180 "data_size": 63488 00:27:12.180 }, 00:27:12.180 { 00:27:12.180 "name": "BaseBdev2", 00:27:12.180 "uuid": "6f295660-666a-5816-8558-8ab4b617a897", 00:27:12.180 "is_configured": true, 00:27:12.180 "data_offset": 2048, 00:27:12.180 "data_size": 63488 00:27:12.180 }, 00:27:12.180 { 00:27:12.180 "name": "BaseBdev3", 00:27:12.180 "uuid": "34fcec04-df8b-535e-a2c9-ce55082e410a", 00:27:12.180 "is_configured": true, 00:27:12.180 "data_offset": 2048, 00:27:12.180 "data_size": 63488 00:27:12.181 } 00:27:12.181 ] 00:27:12.181 }' 00:27:12.181 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.181 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.438 [2024-11-05 15:56:44.671974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.438 [2024-11-05 15:56:44.671999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:12.438 [2024-11-05 15:56:44.674367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.438 [2024-11-05 15:56:44.674409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.438 [2024-11-05 15:56:44.674478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:12.438 [2024-11-05 15:56:44.674487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:12.438 { 00:27:12.438 "results": [ 00:27:12.438 { 00:27:12.438 "job": "raid_bdev1", 00:27:12.438 "core_mask": "0x1", 00:27:12.438 "workload": "randrw", 00:27:12.438 "percentage": 50, 00:27:12.438 "status": "finished", 00:27:12.438 "queue_depth": 1, 00:27:12.438 "io_size": 131072, 00:27:12.438 "runtime": 1.239026, 00:27:12.438 "iops": 18740.5268331738, 00:27:12.438 "mibps": 2342.565854146725, 00:27:12.438 "io_failed": 0, 00:27:12.438 "io_timeout": 0, 00:27:12.438 "avg_latency_us": 50.90741827337176, 00:27:12.438 "min_latency_us": 22.744615384615386, 00:27:12.438 "max_latency_us": 1329.6246153846155 00:27:12.438 } 00:27:12.438 ], 00:27:12.438 "core_count": 1 00:27:12.438 } 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67337 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67337 ']' 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # kill -0 67337 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67337 00:27:12.438 killing process with pid 67337 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67337' 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67337 00:27:12.438 [2024-11-05 15:56:44.697115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:12.438 15:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67337 00:27:12.439 [2024-11-05 15:56:44.809289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NvqJFqsl7U 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:13.003 ************************************ 00:27:13.003 END TEST raid_write_error_test 00:27:13.003 ************************************ 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:13.003 15:56:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:13.003 00:27:13.003 real 0m3.436s 00:27:13.003 user 0m4.167s 00:27:13.003 sys 0m0.354s 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:13.003 15:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.261 15:56:45 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:27:13.261 15:56:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:27:13.261 15:56:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:27:13.261 15:56:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:13.261 15:56:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:13.261 15:56:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:13.261 ************************************ 00:27:13.261 START TEST raid_state_function_test 00:27:13.261 ************************************ 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.261 15:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67464 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67464' 00:27:13.261 Process raid pid: 67464 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67464 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67464 ']' 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:13.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:13.261 15:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.261 [2024-11-05 15:56:45.500421] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:27:13.261 [2024-11-05 15:56:45.500587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.261 [2024-11-05 15:56:45.656699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.519 [2024-11-05 15:56:45.740055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.519 [2024-11-05 15:56:45.848896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:13.519 [2024-11-05 15:56:45.849035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.084 [2024-11-05 15:56:46.345858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.084 [2024-11-05 15:56:46.345898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:14.084 [2024-11-05 15:56:46.345906] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.084 [2024-11-05 15:56:46.345914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.084 [2024-11-05 15:56:46.345919] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:27:14.084 [2024-11-05 15:56:46.345926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:14.084 [2024-11-05 15:56:46.345930] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:14.084 [2024-11-05 15:56:46.345937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:14.084 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:14.085 "name": "Existed_Raid", 00:27:14.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.085 "strip_size_kb": 64, 00:27:14.085 "state": "configuring", 00:27:14.085 "raid_level": "raid0", 00:27:14.085 "superblock": false, 00:27:14.085 "num_base_bdevs": 4, 00:27:14.085 "num_base_bdevs_discovered": 0, 00:27:14.085 "num_base_bdevs_operational": 4, 00:27:14.085 "base_bdevs_list": [ 00:27:14.085 { 00:27:14.085 "name": "BaseBdev1", 00:27:14.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.085 "is_configured": false, 00:27:14.085 "data_offset": 0, 00:27:14.085 "data_size": 0 00:27:14.085 }, 00:27:14.085 { 00:27:14.085 "name": "BaseBdev2", 00:27:14.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.085 "is_configured": false, 00:27:14.085 "data_offset": 0, 00:27:14.085 "data_size": 0 00:27:14.085 }, 00:27:14.085 { 00:27:14.085 "name": "BaseBdev3", 00:27:14.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.085 "is_configured": false, 00:27:14.085 "data_offset": 0, 00:27:14.085 "data_size": 0 00:27:14.085 }, 00:27:14.085 { 00:27:14.085 "name": "BaseBdev4", 00:27:14.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.085 "is_configured": false, 00:27:14.085 "data_offset": 0, 00:27:14.085 "data_size": 0 00:27:14.085 } 00:27:14.085 ] 00:27:14.085 }' 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:14.085 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.344 [2024-11-05 15:56:46.665886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:14.344 [2024-11-05 15:56:46.665916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.344 [2024-11-05 15:56:46.673897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.344 [2024-11-05 15:56:46.673989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:14.344 [2024-11-05 15:56:46.674035] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.344 [2024-11-05 15:56:46.674057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.344 [2024-11-05 15:56:46.674072] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:14.344 [2024-11-05 15:56:46.674088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:14.344 [2024-11-05 15:56:46.674101] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:14.344 [2024-11-05 15:56:46.674118] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.344 [2024-11-05 15:56:46.700983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.344 BaseBdev1 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.344 [ 00:27:14.344 { 00:27:14.344 "name": "BaseBdev1", 00:27:14.344 "aliases": [ 00:27:14.344 "6b99a578-ec1a-4e98-9616-4f695c8d3579" 00:27:14.344 ], 00:27:14.344 "product_name": "Malloc disk", 00:27:14.344 "block_size": 512, 00:27:14.344 "num_blocks": 65536, 00:27:14.344 "uuid": "6b99a578-ec1a-4e98-9616-4f695c8d3579", 00:27:14.344 "assigned_rate_limits": { 00:27:14.344 "rw_ios_per_sec": 0, 00:27:14.344 "rw_mbytes_per_sec": 0, 00:27:14.344 "r_mbytes_per_sec": 0, 00:27:14.344 "w_mbytes_per_sec": 0 00:27:14.344 }, 00:27:14.344 "claimed": true, 00:27:14.344 "claim_type": "exclusive_write", 00:27:14.344 "zoned": false, 00:27:14.344 "supported_io_types": { 00:27:14.344 "read": true, 00:27:14.344 "write": true, 00:27:14.344 "unmap": true, 00:27:14.344 "flush": true, 00:27:14.344 "reset": true, 00:27:14.344 "nvme_admin": false, 00:27:14.344 "nvme_io": false, 00:27:14.344 "nvme_io_md": false, 00:27:14.344 "write_zeroes": true, 00:27:14.344 "zcopy": true, 00:27:14.344 "get_zone_info": false, 00:27:14.344 "zone_management": false, 00:27:14.344 "zone_append": false, 00:27:14.344 "compare": false, 00:27:14.344 "compare_and_write": false, 00:27:14.344 "abort": true, 00:27:14.344 "seek_hole": false, 00:27:14.344 "seek_data": false, 00:27:14.344 "copy": true, 00:27:14.344 "nvme_iov_md": false 00:27:14.344 }, 00:27:14.344 "memory_domains": [ 00:27:14.344 { 00:27:14.344 "dma_device_id": "system", 00:27:14.344 "dma_device_type": 1 00:27:14.344 }, 00:27:14.344 { 00:27:14.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.344 "dma_device_type": 2 00:27:14.344 } 00:27:14.344 ], 00:27:14.344 "driver_specific": {} 00:27:14.344 } 00:27:14.344 ] 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.344 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.601 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:14.601 "name": "Existed_Raid", 
00:27:14.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.601 "strip_size_kb": 64, 00:27:14.601 "state": "configuring", 00:27:14.601 "raid_level": "raid0", 00:27:14.601 "superblock": false, 00:27:14.601 "num_base_bdevs": 4, 00:27:14.601 "num_base_bdevs_discovered": 1, 00:27:14.601 "num_base_bdevs_operational": 4, 00:27:14.601 "base_bdevs_list": [ 00:27:14.601 { 00:27:14.601 "name": "BaseBdev1", 00:27:14.601 "uuid": "6b99a578-ec1a-4e98-9616-4f695c8d3579", 00:27:14.601 "is_configured": true, 00:27:14.601 "data_offset": 0, 00:27:14.601 "data_size": 65536 00:27:14.601 }, 00:27:14.601 { 00:27:14.602 "name": "BaseBdev2", 00:27:14.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.602 "is_configured": false, 00:27:14.602 "data_offset": 0, 00:27:14.602 "data_size": 0 00:27:14.602 }, 00:27:14.602 { 00:27:14.602 "name": "BaseBdev3", 00:27:14.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.602 "is_configured": false, 00:27:14.602 "data_offset": 0, 00:27:14.602 "data_size": 0 00:27:14.602 }, 00:27:14.602 { 00:27:14.602 "name": "BaseBdev4", 00:27:14.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.602 "is_configured": false, 00:27:14.602 "data_offset": 0, 00:27:14.602 "data_size": 0 00:27:14.602 } 00:27:14.602 ] 00:27:14.602 }' 00:27:14.602 15:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:14.602 15:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.860 [2024-11-05 15:56:47.085092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:14.860 [2024-11-05 15:56:47.085259] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.860 [2024-11-05 15:56:47.093130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.860 [2024-11-05 15:56:47.094705] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.860 [2024-11-05 15:56:47.094804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.860 [2024-11-05 15:56:47.094816] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:14.860 [2024-11-05 15:56:47.094825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:14.860 [2024-11-05 15:56:47.094830] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:14.860 [2024-11-05 15:56:47.094837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:14.860 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:14.861 "name": "Existed_Raid", 00:27:14.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.861 "strip_size_kb": 64, 00:27:14.861 "state": "configuring", 00:27:14.861 "raid_level": "raid0", 00:27:14.861 "superblock": false, 00:27:14.861 "num_base_bdevs": 4, 00:27:14.861 
"num_base_bdevs_discovered": 1, 00:27:14.861 "num_base_bdevs_operational": 4, 00:27:14.861 "base_bdevs_list": [ 00:27:14.861 { 00:27:14.861 "name": "BaseBdev1", 00:27:14.861 "uuid": "6b99a578-ec1a-4e98-9616-4f695c8d3579", 00:27:14.861 "is_configured": true, 00:27:14.861 "data_offset": 0, 00:27:14.861 "data_size": 65536 00:27:14.861 }, 00:27:14.861 { 00:27:14.861 "name": "BaseBdev2", 00:27:14.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.861 "is_configured": false, 00:27:14.861 "data_offset": 0, 00:27:14.861 "data_size": 0 00:27:14.861 }, 00:27:14.861 { 00:27:14.861 "name": "BaseBdev3", 00:27:14.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.861 "is_configured": false, 00:27:14.861 "data_offset": 0, 00:27:14.861 "data_size": 0 00:27:14.861 }, 00:27:14.861 { 00:27:14.861 "name": "BaseBdev4", 00:27:14.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.861 "is_configured": false, 00:27:14.861 "data_offset": 0, 00:27:14.861 "data_size": 0 00:27:14.861 } 00:27:14.861 ] 00:27:14.861 }' 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:14.861 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.119 [2024-11-05 15:56:47.403364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:15.119 BaseBdev2 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:15.119 15:56:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.119 [ 00:27:15.119 { 00:27:15.119 "name": "BaseBdev2", 00:27:15.119 "aliases": [ 00:27:15.119 "5a5a0c86-033d-4a51-bf21-67e2d59a2383" 00:27:15.119 ], 00:27:15.119 "product_name": "Malloc disk", 00:27:15.119 "block_size": 512, 00:27:15.119 "num_blocks": 65536, 00:27:15.119 "uuid": "5a5a0c86-033d-4a51-bf21-67e2d59a2383", 00:27:15.119 "assigned_rate_limits": { 00:27:15.119 "rw_ios_per_sec": 0, 00:27:15.119 "rw_mbytes_per_sec": 0, 00:27:15.119 "r_mbytes_per_sec": 0, 00:27:15.119 "w_mbytes_per_sec": 0 00:27:15.119 }, 00:27:15.119 "claimed": true, 00:27:15.119 "claim_type": "exclusive_write", 00:27:15.119 "zoned": false, 00:27:15.119 "supported_io_types": { 
00:27:15.119 "read": true, 00:27:15.119 "write": true, 00:27:15.119 "unmap": true, 00:27:15.119 "flush": true, 00:27:15.119 "reset": true, 00:27:15.119 "nvme_admin": false, 00:27:15.119 "nvme_io": false, 00:27:15.119 "nvme_io_md": false, 00:27:15.119 "write_zeroes": true, 00:27:15.119 "zcopy": true, 00:27:15.119 "get_zone_info": false, 00:27:15.119 "zone_management": false, 00:27:15.119 "zone_append": false, 00:27:15.119 "compare": false, 00:27:15.119 "compare_and_write": false, 00:27:15.119 "abort": true, 00:27:15.119 "seek_hole": false, 00:27:15.119 "seek_data": false, 00:27:15.119 "copy": true, 00:27:15.119 "nvme_iov_md": false 00:27:15.119 }, 00:27:15.119 "memory_domains": [ 00:27:15.119 { 00:27:15.119 "dma_device_id": "system", 00:27:15.119 "dma_device_type": 1 00:27:15.119 }, 00:27:15.119 { 00:27:15.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.119 "dma_device_type": 2 00:27:15.119 } 00:27:15.119 ], 00:27:15.119 "driver_specific": {} 00:27:15.119 } 00:27:15.119 ] 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.119 "name": "Existed_Raid", 00:27:15.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.119 "strip_size_kb": 64, 00:27:15.119 "state": "configuring", 00:27:15.119 "raid_level": "raid0", 00:27:15.119 "superblock": false, 00:27:15.119 "num_base_bdevs": 4, 00:27:15.119 "num_base_bdevs_discovered": 2, 00:27:15.119 "num_base_bdevs_operational": 4, 00:27:15.119 "base_bdevs_list": [ 00:27:15.119 { 00:27:15.119 "name": "BaseBdev1", 00:27:15.119 "uuid": "6b99a578-ec1a-4e98-9616-4f695c8d3579", 00:27:15.119 "is_configured": true, 00:27:15.119 "data_offset": 0, 00:27:15.119 "data_size": 65536 00:27:15.119 }, 00:27:15.119 { 00:27:15.119 "name": "BaseBdev2", 00:27:15.119 "uuid": "5a5a0c86-033d-4a51-bf21-67e2d59a2383", 00:27:15.119 
"is_configured": true, 00:27:15.119 "data_offset": 0, 00:27:15.119 "data_size": 65536 00:27:15.119 }, 00:27:15.119 { 00:27:15.119 "name": "BaseBdev3", 00:27:15.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.119 "is_configured": false, 00:27:15.119 "data_offset": 0, 00:27:15.119 "data_size": 0 00:27:15.119 }, 00:27:15.119 { 00:27:15.119 "name": "BaseBdev4", 00:27:15.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.119 "is_configured": false, 00:27:15.119 "data_offset": 0, 00:27:15.119 "data_size": 0 00:27:15.119 } 00:27:15.119 ] 00:27:15.119 }' 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.119 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.377 [2024-11-05 15:56:47.782142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:15.377 BaseBdev3 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.377 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.636 [ 00:27:15.636 { 00:27:15.636 "name": "BaseBdev3", 00:27:15.636 "aliases": [ 00:27:15.636 "bf09e1fb-4d74-47a6-8297-dbc2207017e3" 00:27:15.636 ], 00:27:15.636 "product_name": "Malloc disk", 00:27:15.636 "block_size": 512, 00:27:15.636 "num_blocks": 65536, 00:27:15.636 "uuid": "bf09e1fb-4d74-47a6-8297-dbc2207017e3", 00:27:15.636 "assigned_rate_limits": { 00:27:15.636 "rw_ios_per_sec": 0, 00:27:15.636 "rw_mbytes_per_sec": 0, 00:27:15.636 "r_mbytes_per_sec": 0, 00:27:15.636 "w_mbytes_per_sec": 0 00:27:15.636 }, 00:27:15.636 "claimed": true, 00:27:15.636 "claim_type": "exclusive_write", 00:27:15.636 "zoned": false, 00:27:15.636 "supported_io_types": { 00:27:15.636 "read": true, 00:27:15.636 "write": true, 00:27:15.636 "unmap": true, 00:27:15.636 "flush": true, 00:27:15.636 "reset": true, 00:27:15.636 "nvme_admin": false, 00:27:15.636 "nvme_io": false, 00:27:15.636 "nvme_io_md": false, 00:27:15.636 "write_zeroes": true, 00:27:15.636 "zcopy": true, 00:27:15.636 "get_zone_info": false, 00:27:15.636 "zone_management": false, 00:27:15.636 "zone_append": false, 00:27:15.636 "compare": false, 00:27:15.636 "compare_and_write": false, 
00:27:15.636 "abort": true, 00:27:15.636 "seek_hole": false, 00:27:15.636 "seek_data": false, 00:27:15.636 "copy": true, 00:27:15.636 "nvme_iov_md": false 00:27:15.636 }, 00:27:15.636 "memory_domains": [ 00:27:15.636 { 00:27:15.636 "dma_device_id": "system", 00:27:15.636 "dma_device_type": 1 00:27:15.636 }, 00:27:15.636 { 00:27:15.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.636 "dma_device_type": 2 00:27:15.636 } 00:27:15.636 ], 00:27:15.636 "driver_specific": {} 00:27:15.636 } 00:27:15.636 ] 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.636 "name": "Existed_Raid", 00:27:15.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.636 "strip_size_kb": 64, 00:27:15.636 "state": "configuring", 00:27:15.636 "raid_level": "raid0", 00:27:15.636 "superblock": false, 00:27:15.636 "num_base_bdevs": 4, 00:27:15.636 "num_base_bdevs_discovered": 3, 00:27:15.636 "num_base_bdevs_operational": 4, 00:27:15.636 "base_bdevs_list": [ 00:27:15.636 { 00:27:15.636 "name": "BaseBdev1", 00:27:15.636 "uuid": "6b99a578-ec1a-4e98-9616-4f695c8d3579", 00:27:15.636 "is_configured": true, 00:27:15.636 "data_offset": 0, 00:27:15.636 "data_size": 65536 00:27:15.636 }, 00:27:15.636 { 00:27:15.636 "name": "BaseBdev2", 00:27:15.636 "uuid": "5a5a0c86-033d-4a51-bf21-67e2d59a2383", 00:27:15.636 "is_configured": true, 00:27:15.636 "data_offset": 0, 00:27:15.636 "data_size": 65536 00:27:15.636 }, 00:27:15.636 { 00:27:15.636 "name": "BaseBdev3", 00:27:15.636 "uuid": "bf09e1fb-4d74-47a6-8297-dbc2207017e3", 00:27:15.636 "is_configured": true, 00:27:15.636 "data_offset": 0, 00:27:15.636 "data_size": 65536 00:27:15.636 }, 00:27:15.636 { 00:27:15.636 "name": "BaseBdev4", 00:27:15.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.636 "is_configured": false, 
00:27:15.636 "data_offset": 0, 00:27:15.636 "data_size": 0 00:27:15.636 } 00:27:15.636 ] 00:27:15.636 }' 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.636 15:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.894 [2024-11-05 15:56:48.144313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:15.894 [2024-11-05 15:56:48.144457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:15.894 [2024-11-05 15:56:48.144483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:15.894 [2024-11-05 15:56:48.144751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:15.894 [2024-11-05 15:56:48.144959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:15.894 [2024-11-05 15:56:48.145085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:15.894 [2024-11-05 15:56:48.145318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:15.894 BaseBdev4 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.894 [ 00:27:15.894 { 00:27:15.894 "name": "BaseBdev4", 00:27:15.894 "aliases": [ 00:27:15.894 "b45515eb-b901-400f-9fa7-4cba7a06ca5e" 00:27:15.894 ], 00:27:15.894 "product_name": "Malloc disk", 00:27:15.894 "block_size": 512, 00:27:15.894 "num_blocks": 65536, 00:27:15.894 "uuid": "b45515eb-b901-400f-9fa7-4cba7a06ca5e", 00:27:15.894 "assigned_rate_limits": { 00:27:15.894 "rw_ios_per_sec": 0, 00:27:15.894 "rw_mbytes_per_sec": 0, 00:27:15.894 "r_mbytes_per_sec": 0, 00:27:15.894 "w_mbytes_per_sec": 0 00:27:15.894 }, 00:27:15.894 "claimed": true, 00:27:15.894 "claim_type": "exclusive_write", 00:27:15.894 "zoned": false, 00:27:15.894 "supported_io_types": { 00:27:15.894 "read": true, 00:27:15.894 "write": true, 00:27:15.894 "unmap": true, 00:27:15.894 "flush": true, 00:27:15.894 "reset": true, 00:27:15.894 
"nvme_admin": false, 00:27:15.894 "nvme_io": false, 00:27:15.894 "nvme_io_md": false, 00:27:15.894 "write_zeroes": true, 00:27:15.894 "zcopy": true, 00:27:15.894 "get_zone_info": false, 00:27:15.894 "zone_management": false, 00:27:15.894 "zone_append": false, 00:27:15.894 "compare": false, 00:27:15.894 "compare_and_write": false, 00:27:15.894 "abort": true, 00:27:15.894 "seek_hole": false, 00:27:15.894 "seek_data": false, 00:27:15.894 "copy": true, 00:27:15.894 "nvme_iov_md": false 00:27:15.894 }, 00:27:15.894 "memory_domains": [ 00:27:15.894 { 00:27:15.894 "dma_device_id": "system", 00:27:15.894 "dma_device_type": 1 00:27:15.894 }, 00:27:15.894 { 00:27:15.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.894 "dma_device_type": 2 00:27:15.894 } 00:27:15.894 ], 00:27:15.894 "driver_specific": {} 00:27:15.894 } 00:27:15.894 ] 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:15.894 15:56:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.894 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.895 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.895 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.895 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.895 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.895 "name": "Existed_Raid", 00:27:15.895 "uuid": "dab85c7a-9590-4369-9bca-26fd0b7d6ac0", 00:27:15.895 "strip_size_kb": 64, 00:27:15.895 "state": "online", 00:27:15.895 "raid_level": "raid0", 00:27:15.895 "superblock": false, 00:27:15.895 "num_base_bdevs": 4, 00:27:15.895 "num_base_bdevs_discovered": 4, 00:27:15.895 "num_base_bdevs_operational": 4, 00:27:15.895 "base_bdevs_list": [ 00:27:15.895 { 00:27:15.895 "name": "BaseBdev1", 00:27:15.895 "uuid": "6b99a578-ec1a-4e98-9616-4f695c8d3579", 00:27:15.895 "is_configured": true, 00:27:15.895 "data_offset": 0, 00:27:15.895 "data_size": 65536 00:27:15.895 }, 00:27:15.895 { 00:27:15.895 "name": "BaseBdev2", 00:27:15.895 "uuid": "5a5a0c86-033d-4a51-bf21-67e2d59a2383", 00:27:15.895 "is_configured": true, 00:27:15.895 "data_offset": 0, 00:27:15.895 "data_size": 65536 00:27:15.895 }, 00:27:15.895 { 00:27:15.895 "name": "BaseBdev3", 00:27:15.895 "uuid": 
"bf09e1fb-4d74-47a6-8297-dbc2207017e3", 00:27:15.895 "is_configured": true, 00:27:15.895 "data_offset": 0, 00:27:15.895 "data_size": 65536 00:27:15.895 }, 00:27:15.895 { 00:27:15.895 "name": "BaseBdev4", 00:27:15.895 "uuid": "b45515eb-b901-400f-9fa7-4cba7a06ca5e", 00:27:15.895 "is_configured": true, 00:27:15.895 "data_offset": 0, 00:27:15.895 "data_size": 65536 00:27:15.895 } 00:27:15.895 ] 00:27:15.895 }' 00:27:15.895 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.895 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.153 [2024-11-05 15:56:48.496709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:16.153 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.153 15:56:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:16.153 "name": "Existed_Raid", 00:27:16.153 "aliases": [ 00:27:16.153 "dab85c7a-9590-4369-9bca-26fd0b7d6ac0" 00:27:16.153 ], 00:27:16.153 "product_name": "Raid Volume", 00:27:16.153 "block_size": 512, 00:27:16.153 "num_blocks": 262144, 00:27:16.153 "uuid": "dab85c7a-9590-4369-9bca-26fd0b7d6ac0", 00:27:16.153 "assigned_rate_limits": { 00:27:16.153 "rw_ios_per_sec": 0, 00:27:16.153 "rw_mbytes_per_sec": 0, 00:27:16.153 "r_mbytes_per_sec": 0, 00:27:16.153 "w_mbytes_per_sec": 0 00:27:16.153 }, 00:27:16.153 "claimed": false, 00:27:16.153 "zoned": false, 00:27:16.153 "supported_io_types": { 00:27:16.153 "read": true, 00:27:16.153 "write": true, 00:27:16.153 "unmap": true, 00:27:16.153 "flush": true, 00:27:16.153 "reset": true, 00:27:16.153 "nvme_admin": false, 00:27:16.153 "nvme_io": false, 00:27:16.153 "nvme_io_md": false, 00:27:16.153 "write_zeroes": true, 00:27:16.153 "zcopy": false, 00:27:16.153 "get_zone_info": false, 00:27:16.153 "zone_management": false, 00:27:16.153 "zone_append": false, 00:27:16.153 "compare": false, 00:27:16.153 "compare_and_write": false, 00:27:16.153 "abort": false, 00:27:16.153 "seek_hole": false, 00:27:16.153 "seek_data": false, 00:27:16.153 "copy": false, 00:27:16.153 "nvme_iov_md": false 00:27:16.153 }, 00:27:16.153 "memory_domains": [ 00:27:16.153 { 00:27:16.153 "dma_device_id": "system", 00:27:16.153 "dma_device_type": 1 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.153 "dma_device_type": 2 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "dma_device_id": "system", 00:27:16.153 "dma_device_type": 1 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.153 "dma_device_type": 2 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "dma_device_id": "system", 00:27:16.153 "dma_device_type": 1 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:27:16.153 "dma_device_type": 2 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "dma_device_id": "system", 00:27:16.153 "dma_device_type": 1 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.153 "dma_device_type": 2 00:27:16.153 } 00:27:16.153 ], 00:27:16.153 "driver_specific": { 00:27:16.153 "raid": { 00:27:16.153 "uuid": "dab85c7a-9590-4369-9bca-26fd0b7d6ac0", 00:27:16.153 "strip_size_kb": 64, 00:27:16.153 "state": "online", 00:27:16.153 "raid_level": "raid0", 00:27:16.153 "superblock": false, 00:27:16.153 "num_base_bdevs": 4, 00:27:16.153 "num_base_bdevs_discovered": 4, 00:27:16.153 "num_base_bdevs_operational": 4, 00:27:16.153 "base_bdevs_list": [ 00:27:16.153 { 00:27:16.153 "name": "BaseBdev1", 00:27:16.153 "uuid": "6b99a578-ec1a-4e98-9616-4f695c8d3579", 00:27:16.153 "is_configured": true, 00:27:16.153 "data_offset": 0, 00:27:16.153 "data_size": 65536 00:27:16.153 }, 00:27:16.153 { 00:27:16.153 "name": "BaseBdev2", 00:27:16.153 "uuid": "5a5a0c86-033d-4a51-bf21-67e2d59a2383", 00:27:16.153 "is_configured": true, 00:27:16.153 "data_offset": 0, 00:27:16.154 "data_size": 65536 00:27:16.154 }, 00:27:16.154 { 00:27:16.154 "name": "BaseBdev3", 00:27:16.154 "uuid": "bf09e1fb-4d74-47a6-8297-dbc2207017e3", 00:27:16.154 "is_configured": true, 00:27:16.154 "data_offset": 0, 00:27:16.154 "data_size": 65536 00:27:16.154 }, 00:27:16.154 { 00:27:16.154 "name": "BaseBdev4", 00:27:16.154 "uuid": "b45515eb-b901-400f-9fa7-4cba7a06ca5e", 00:27:16.154 "is_configured": true, 00:27:16.154 "data_offset": 0, 00:27:16.154 "data_size": 65536 00:27:16.154 } 00:27:16.154 ] 00:27:16.154 } 00:27:16.154 } 00:27:16.154 }' 00:27:16.154 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:16.154 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:16.154 BaseBdev2 00:27:16.154 BaseBdev3 
00:27:16.154 BaseBdev4' 00:27:16.154 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.412 15:56:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:16.412 15:56:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.412 [2024-11-05 15:56:48.704498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:16.412 [2024-11-05 15:56:48.704522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:16.412 [2024-11-05 15:56:48.704561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:16.412 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:16.413 "name": "Existed_Raid", 00:27:16.413 "uuid": "dab85c7a-9590-4369-9bca-26fd0b7d6ac0", 00:27:16.413 "strip_size_kb": 64, 00:27:16.413 "state": "offline", 00:27:16.413 "raid_level": "raid0", 00:27:16.413 "superblock": false, 00:27:16.413 "num_base_bdevs": 4, 00:27:16.413 "num_base_bdevs_discovered": 3, 00:27:16.413 "num_base_bdevs_operational": 3, 00:27:16.413 "base_bdevs_list": [ 00:27:16.413 { 00:27:16.413 "name": null, 00:27:16.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.413 "is_configured": false, 00:27:16.413 "data_offset": 0, 00:27:16.413 "data_size": 65536 00:27:16.413 }, 00:27:16.413 { 00:27:16.413 "name": "BaseBdev2", 00:27:16.413 "uuid": "5a5a0c86-033d-4a51-bf21-67e2d59a2383", 00:27:16.413 "is_configured": 
true, 00:27:16.413 "data_offset": 0, 00:27:16.413 "data_size": 65536 00:27:16.413 }, 00:27:16.413 { 00:27:16.413 "name": "BaseBdev3", 00:27:16.413 "uuid": "bf09e1fb-4d74-47a6-8297-dbc2207017e3", 00:27:16.413 "is_configured": true, 00:27:16.413 "data_offset": 0, 00:27:16.413 "data_size": 65536 00:27:16.413 }, 00:27:16.413 { 00:27:16.413 "name": "BaseBdev4", 00:27:16.413 "uuid": "b45515eb-b901-400f-9fa7-4cba7a06ca5e", 00:27:16.413 "is_configured": true, 00:27:16.413 "data_offset": 0, 00:27:16.413 "data_size": 65536 00:27:16.413 } 00:27:16.413 ] 00:27:16.413 }' 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:16.413 15:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.670 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:16.670 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:16.670 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:16.670 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.670 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.670 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.670 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.928 [2024-11-05 15:56:49.098273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.928 [2024-11-05 15:56:49.179718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:16.928 15:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.928 [2024-11-05 15:56:49.264825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:16.928 [2024-11-05 15:56:49.264945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.928 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 BaseBdev2 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 [ 00:27:17.187 { 00:27:17.187 "name": "BaseBdev2", 00:27:17.187 "aliases": [ 00:27:17.187 "d9fa75b2-baa3-4e42-8663-33b5a0b6c836" 00:27:17.187 ], 00:27:17.187 "product_name": "Malloc disk", 00:27:17.187 "block_size": 512, 00:27:17.187 "num_blocks": 65536, 00:27:17.187 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:17.187 "assigned_rate_limits": { 00:27:17.187 "rw_ios_per_sec": 0, 00:27:17.187 "rw_mbytes_per_sec": 0, 00:27:17.187 "r_mbytes_per_sec": 0, 00:27:17.187 "w_mbytes_per_sec": 0 00:27:17.187 }, 00:27:17.187 "claimed": false, 00:27:17.187 "zoned": false, 00:27:17.187 "supported_io_types": { 00:27:17.187 "read": true, 00:27:17.187 "write": true, 00:27:17.187 "unmap": true, 00:27:17.187 "flush": true, 00:27:17.187 "reset": true, 00:27:17.187 "nvme_admin": false, 00:27:17.187 "nvme_io": false, 00:27:17.187 "nvme_io_md": false, 00:27:17.187 "write_zeroes": true, 00:27:17.187 "zcopy": true, 00:27:17.187 "get_zone_info": false, 00:27:17.187 "zone_management": false, 00:27:17.187 "zone_append": false, 00:27:17.187 "compare": false, 00:27:17.187 "compare_and_write": false, 00:27:17.187 "abort": true, 00:27:17.187 "seek_hole": false, 00:27:17.187 
"seek_data": false, 00:27:17.187 "copy": true, 00:27:17.187 "nvme_iov_md": false 00:27:17.187 }, 00:27:17.187 "memory_domains": [ 00:27:17.187 { 00:27:17.187 "dma_device_id": "system", 00:27:17.187 "dma_device_type": 1 00:27:17.187 }, 00:27:17.187 { 00:27:17.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.187 "dma_device_type": 2 00:27:17.187 } 00:27:17.187 ], 00:27:17.187 "driver_specific": {} 00:27:17.187 } 00:27:17.187 ] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 BaseBdev3 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 [ 00:27:17.187 { 00:27:17.187 "name": "BaseBdev3", 00:27:17.187 "aliases": [ 00:27:17.187 "ce564503-3ecd-410c-b314-3c62cf80c7c9" 00:27:17.187 ], 00:27:17.187 "product_name": "Malloc disk", 00:27:17.187 "block_size": 512, 00:27:17.187 "num_blocks": 65536, 00:27:17.187 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:17.187 "assigned_rate_limits": { 00:27:17.187 "rw_ios_per_sec": 0, 00:27:17.187 "rw_mbytes_per_sec": 0, 00:27:17.187 "r_mbytes_per_sec": 0, 00:27:17.187 "w_mbytes_per_sec": 0 00:27:17.187 }, 00:27:17.187 "claimed": false, 00:27:17.187 "zoned": false, 00:27:17.187 "supported_io_types": { 00:27:17.187 "read": true, 00:27:17.187 "write": true, 00:27:17.187 "unmap": true, 00:27:17.187 "flush": true, 00:27:17.187 "reset": true, 00:27:17.187 "nvme_admin": false, 00:27:17.187 "nvme_io": false, 00:27:17.187 "nvme_io_md": false, 00:27:17.187 "write_zeroes": true, 00:27:17.187 "zcopy": true, 00:27:17.187 "get_zone_info": false, 00:27:17.187 "zone_management": false, 00:27:17.187 "zone_append": false, 00:27:17.187 "compare": false, 00:27:17.187 "compare_and_write": false, 00:27:17.187 "abort": true, 00:27:17.187 "seek_hole": false, 00:27:17.187 "seek_data": false, 
00:27:17.187 "copy": true, 00:27:17.187 "nvme_iov_md": false 00:27:17.187 }, 00:27:17.187 "memory_domains": [ 00:27:17.187 { 00:27:17.187 "dma_device_id": "system", 00:27:17.187 "dma_device_type": 1 00:27:17.187 }, 00:27:17.187 { 00:27:17.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.187 "dma_device_type": 2 00:27:17.187 } 00:27:17.187 ], 00:27:17.187 "driver_specific": {} 00:27:17.187 } 00:27:17.187 ] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 BaseBdev4 00:27:17.187 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:17.188 
15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 [ 00:27:17.188 { 00:27:17.188 "name": "BaseBdev4", 00:27:17.188 "aliases": [ 00:27:17.188 "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134" 00:27:17.188 ], 00:27:17.188 "product_name": "Malloc disk", 00:27:17.188 "block_size": 512, 00:27:17.188 "num_blocks": 65536, 00:27:17.188 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:17.188 "assigned_rate_limits": { 00:27:17.188 "rw_ios_per_sec": 0, 00:27:17.188 "rw_mbytes_per_sec": 0, 00:27:17.188 "r_mbytes_per_sec": 0, 00:27:17.188 "w_mbytes_per_sec": 0 00:27:17.188 }, 00:27:17.188 "claimed": false, 00:27:17.188 "zoned": false, 00:27:17.188 "supported_io_types": { 00:27:17.188 "read": true, 00:27:17.188 "write": true, 00:27:17.188 "unmap": true, 00:27:17.188 "flush": true, 00:27:17.188 "reset": true, 00:27:17.188 "nvme_admin": false, 00:27:17.188 "nvme_io": false, 00:27:17.188 "nvme_io_md": false, 00:27:17.188 "write_zeroes": true, 00:27:17.188 "zcopy": true, 00:27:17.188 "get_zone_info": false, 00:27:17.188 "zone_management": false, 00:27:17.188 "zone_append": false, 00:27:17.188 "compare": false, 00:27:17.188 "compare_and_write": false, 00:27:17.188 "abort": true, 00:27:17.188 "seek_hole": false, 00:27:17.188 "seek_data": false, 00:27:17.188 
"copy": true, 00:27:17.188 "nvme_iov_md": false 00:27:17.188 }, 00:27:17.188 "memory_domains": [ 00:27:17.188 { 00:27:17.188 "dma_device_id": "system", 00:27:17.188 "dma_device_type": 1 00:27:17.188 }, 00:27:17.188 { 00:27:17.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.188 "dma_device_type": 2 00:27:17.188 } 00:27:17.188 ], 00:27:17.188 "driver_specific": {} 00:27:17.188 } 00:27:17.188 ] 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 [2024-11-05 15:56:49.509805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:17.188 [2024-11-05 15:56:49.509961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:17.188 [2024-11-05 15:56:49.510026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.188 [2024-11-05 15:56:49.511596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:17.188 [2024-11-05 15:56:49.511710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 15:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.188 "name": "Existed_Raid", 00:27:17.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.188 "strip_size_kb": 64, 00:27:17.188 "state": "configuring", 00:27:17.188 
"raid_level": "raid0", 00:27:17.188 "superblock": false, 00:27:17.188 "num_base_bdevs": 4, 00:27:17.188 "num_base_bdevs_discovered": 3, 00:27:17.188 "num_base_bdevs_operational": 4, 00:27:17.188 "base_bdevs_list": [ 00:27:17.188 { 00:27:17.188 "name": "BaseBdev1", 00:27:17.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.188 "is_configured": false, 00:27:17.188 "data_offset": 0, 00:27:17.188 "data_size": 0 00:27:17.188 }, 00:27:17.188 { 00:27:17.188 "name": "BaseBdev2", 00:27:17.188 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:17.188 "is_configured": true, 00:27:17.188 "data_offset": 0, 00:27:17.188 "data_size": 65536 00:27:17.188 }, 00:27:17.188 { 00:27:17.188 "name": "BaseBdev3", 00:27:17.188 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:17.188 "is_configured": true, 00:27:17.188 "data_offset": 0, 00:27:17.188 "data_size": 65536 00:27:17.188 }, 00:27:17.188 { 00:27:17.188 "name": "BaseBdev4", 00:27:17.188 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:17.188 "is_configured": true, 00:27:17.188 "data_offset": 0, 00:27:17.188 "data_size": 65536 00:27:17.188 } 00:27:17.188 ] 00:27:17.188 }' 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.188 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.444 [2024-11-05 15:56:49.829872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.444 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.701 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.701 "name": "Existed_Raid", 00:27:17.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.701 "strip_size_kb": 64, 00:27:17.701 "state": "configuring", 00:27:17.701 "raid_level": "raid0", 00:27:17.701 "superblock": false, 00:27:17.701 
"num_base_bdevs": 4, 00:27:17.701 "num_base_bdevs_discovered": 2, 00:27:17.701 "num_base_bdevs_operational": 4, 00:27:17.701 "base_bdevs_list": [ 00:27:17.701 { 00:27:17.701 "name": "BaseBdev1", 00:27:17.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.701 "is_configured": false, 00:27:17.701 "data_offset": 0, 00:27:17.701 "data_size": 0 00:27:17.701 }, 00:27:17.701 { 00:27:17.701 "name": null, 00:27:17.701 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:17.701 "is_configured": false, 00:27:17.701 "data_offset": 0, 00:27:17.701 "data_size": 65536 00:27:17.701 }, 00:27:17.701 { 00:27:17.701 "name": "BaseBdev3", 00:27:17.701 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:17.701 "is_configured": true, 00:27:17.701 "data_offset": 0, 00:27:17.701 "data_size": 65536 00:27:17.701 }, 00:27:17.701 { 00:27:17.701 "name": "BaseBdev4", 00:27:17.701 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:17.701 "is_configured": true, 00:27:17.701 "data_offset": 0, 00:27:17.701 "data_size": 65536 00:27:17.701 } 00:27:17.701 ] 00:27:17.701 }' 00:27:17.701 15:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.701 15:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:17.958 15:56:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.958 [2024-11-05 15:56:50.192184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:17.958 BaseBdev1 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.958 [ 00:27:17.958 { 00:27:17.958 "name": "BaseBdev1", 00:27:17.958 "aliases": [ 00:27:17.958 "017eb28c-db18-416b-9e0e-9f5d4a24e300" 00:27:17.958 ], 00:27:17.958 "product_name": "Malloc disk", 00:27:17.958 "block_size": 512, 00:27:17.958 "num_blocks": 65536, 00:27:17.958 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:17.958 "assigned_rate_limits": { 00:27:17.958 "rw_ios_per_sec": 0, 00:27:17.958 "rw_mbytes_per_sec": 0, 00:27:17.958 "r_mbytes_per_sec": 0, 00:27:17.958 "w_mbytes_per_sec": 0 00:27:17.958 }, 00:27:17.958 "claimed": true, 00:27:17.958 "claim_type": "exclusive_write", 00:27:17.958 "zoned": false, 00:27:17.958 "supported_io_types": { 00:27:17.958 "read": true, 00:27:17.958 "write": true, 00:27:17.958 "unmap": true, 00:27:17.958 "flush": true, 00:27:17.958 "reset": true, 00:27:17.958 "nvme_admin": false, 00:27:17.958 "nvme_io": false, 00:27:17.958 "nvme_io_md": false, 00:27:17.958 "write_zeroes": true, 00:27:17.958 "zcopy": true, 00:27:17.958 "get_zone_info": false, 00:27:17.958 "zone_management": false, 00:27:17.958 "zone_append": false, 00:27:17.958 "compare": false, 00:27:17.958 "compare_and_write": false, 00:27:17.958 "abort": true, 00:27:17.958 "seek_hole": false, 00:27:17.958 "seek_data": false, 00:27:17.958 "copy": true, 00:27:17.958 "nvme_iov_md": false 00:27:17.958 }, 00:27:17.958 "memory_domains": [ 00:27:17.958 { 00:27:17.958 "dma_device_id": "system", 00:27:17.958 "dma_device_type": 1 00:27:17.958 }, 00:27:17.958 { 00:27:17.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.958 "dma_device_type": 2 00:27:17.958 } 00:27:17.958 ], 00:27:17.958 "driver_specific": {} 00:27:17.958 } 00:27:17.958 ] 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.958 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.959 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.959 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.959 "name": "Existed_Raid", 00:27:17.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.959 "strip_size_kb": 64, 00:27:17.959 "state": "configuring", 00:27:17.959 "raid_level": "raid0", 00:27:17.959 "superblock": false, 
00:27:17.959 "num_base_bdevs": 4, 00:27:17.959 "num_base_bdevs_discovered": 3, 00:27:17.959 "num_base_bdevs_operational": 4, 00:27:17.959 "base_bdevs_list": [ 00:27:17.959 { 00:27:17.959 "name": "BaseBdev1", 00:27:17.959 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:17.959 "is_configured": true, 00:27:17.959 "data_offset": 0, 00:27:17.959 "data_size": 65536 00:27:17.959 }, 00:27:17.959 { 00:27:17.959 "name": null, 00:27:17.959 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:17.959 "is_configured": false, 00:27:17.959 "data_offset": 0, 00:27:17.959 "data_size": 65536 00:27:17.959 }, 00:27:17.959 { 00:27:17.959 "name": "BaseBdev3", 00:27:17.959 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:17.959 "is_configured": true, 00:27:17.959 "data_offset": 0, 00:27:17.959 "data_size": 65536 00:27:17.959 }, 00:27:17.959 { 00:27:17.959 "name": "BaseBdev4", 00:27:17.959 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:17.959 "is_configured": true, 00:27:17.959 "data_offset": 0, 00:27:17.959 "data_size": 65536 00:27:17.959 } 00:27:17.959 ] 00:27:17.959 }' 00:27:17.959 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.959 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:18.217 15:56:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.217 [2024-11-05 15:56:50.544300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:18.217 "name": "Existed_Raid", 00:27:18.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.217 "strip_size_kb": 64, 00:27:18.217 "state": "configuring", 00:27:18.217 "raid_level": "raid0", 00:27:18.217 "superblock": false, 00:27:18.217 "num_base_bdevs": 4, 00:27:18.217 "num_base_bdevs_discovered": 2, 00:27:18.217 "num_base_bdevs_operational": 4, 00:27:18.217 "base_bdevs_list": [ 00:27:18.217 { 00:27:18.217 "name": "BaseBdev1", 00:27:18.217 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:18.217 "is_configured": true, 00:27:18.217 "data_offset": 0, 00:27:18.217 "data_size": 65536 00:27:18.217 }, 00:27:18.217 { 00:27:18.217 "name": null, 00:27:18.217 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:18.217 "is_configured": false, 00:27:18.217 "data_offset": 0, 00:27:18.217 "data_size": 65536 00:27:18.217 }, 00:27:18.217 { 00:27:18.217 "name": null, 00:27:18.217 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:18.217 "is_configured": false, 00:27:18.217 "data_offset": 0, 00:27:18.217 "data_size": 65536 00:27:18.217 }, 00:27:18.217 { 00:27:18.217 "name": "BaseBdev4", 00:27:18.217 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:18.217 "is_configured": true, 00:27:18.217 "data_offset": 0, 00:27:18.217 "data_size": 65536 00:27:18.217 } 00:27:18.217 ] 00:27:18.217 }' 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:18.217 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.475 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.733 [2024-11-05 15:56:50.892366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:18.733 "name": "Existed_Raid", 00:27:18.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.733 "strip_size_kb": 64, 00:27:18.733 "state": "configuring", 00:27:18.733 "raid_level": "raid0", 00:27:18.733 "superblock": false, 00:27:18.733 "num_base_bdevs": 4, 00:27:18.733 "num_base_bdevs_discovered": 3, 00:27:18.733 "num_base_bdevs_operational": 4, 00:27:18.733 "base_bdevs_list": [ 00:27:18.733 { 00:27:18.733 "name": "BaseBdev1", 00:27:18.733 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:18.733 "is_configured": true, 00:27:18.733 "data_offset": 0, 00:27:18.733 "data_size": 65536 00:27:18.733 }, 00:27:18.733 { 00:27:18.733 "name": null, 00:27:18.733 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:18.733 "is_configured": false, 00:27:18.733 "data_offset": 0, 00:27:18.733 "data_size": 65536 00:27:18.733 }, 00:27:18.733 { 00:27:18.733 "name": "BaseBdev3", 00:27:18.733 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:18.733 "is_configured": 
true, 00:27:18.733 "data_offset": 0, 00:27:18.733 "data_size": 65536 00:27:18.733 }, 00:27:18.733 { 00:27:18.733 "name": "BaseBdev4", 00:27:18.733 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:18.733 "is_configured": true, 00:27:18.733 "data_offset": 0, 00:27:18.733 "data_size": 65536 00:27:18.733 } 00:27:18.733 ] 00:27:18.733 }' 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:18.733 15:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.991 [2024-11-05 15:56:51.244450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:18.991 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.992 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.992 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.992 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.992 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.992 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:18.992 "name": "Existed_Raid", 00:27:18.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.992 "strip_size_kb": 64, 00:27:18.992 "state": "configuring", 00:27:18.992 "raid_level": "raid0", 00:27:18.992 "superblock": false, 00:27:18.992 "num_base_bdevs": 4, 00:27:18.992 "num_base_bdevs_discovered": 2, 00:27:18.992 "num_base_bdevs_operational": 4, 00:27:18.992 
"base_bdevs_list": [ 00:27:18.992 { 00:27:18.992 "name": null, 00:27:18.992 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:18.992 "is_configured": false, 00:27:18.992 "data_offset": 0, 00:27:18.992 "data_size": 65536 00:27:18.992 }, 00:27:18.992 { 00:27:18.992 "name": null, 00:27:18.992 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:18.992 "is_configured": false, 00:27:18.992 "data_offset": 0, 00:27:18.992 "data_size": 65536 00:27:18.992 }, 00:27:18.992 { 00:27:18.992 "name": "BaseBdev3", 00:27:18.992 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:18.992 "is_configured": true, 00:27:18.992 "data_offset": 0, 00:27:18.992 "data_size": 65536 00:27:18.992 }, 00:27:18.992 { 00:27:18.992 "name": "BaseBdev4", 00:27:18.992 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:18.992 "is_configured": true, 00:27:18.992 "data_offset": 0, 00:27:18.992 "data_size": 65536 00:27:18.992 } 00:27:18.992 ] 00:27:18.992 }' 00:27:18.992 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:18.992 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:19.250 15:56:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.250 [2024-11-05 15:56:51.634227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:27:19.250 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.509 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.509 "name": "Existed_Raid", 00:27:19.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.509 "strip_size_kb": 64, 00:27:19.509 "state": "configuring", 00:27:19.509 "raid_level": "raid0", 00:27:19.509 "superblock": false, 00:27:19.509 "num_base_bdevs": 4, 00:27:19.509 "num_base_bdevs_discovered": 3, 00:27:19.509 "num_base_bdevs_operational": 4, 00:27:19.509 "base_bdevs_list": [ 00:27:19.509 { 00:27:19.509 "name": null, 00:27:19.509 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:19.509 "is_configured": false, 00:27:19.509 "data_offset": 0, 00:27:19.509 "data_size": 65536 00:27:19.509 }, 00:27:19.509 { 00:27:19.509 "name": "BaseBdev2", 00:27:19.509 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:19.509 "is_configured": true, 00:27:19.509 "data_offset": 0, 00:27:19.509 "data_size": 65536 00:27:19.509 }, 00:27:19.509 { 00:27:19.509 "name": "BaseBdev3", 00:27:19.509 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:19.509 "is_configured": true, 00:27:19.509 "data_offset": 0, 00:27:19.509 "data_size": 65536 00:27:19.509 }, 00:27:19.509 { 00:27:19.509 "name": "BaseBdev4", 00:27:19.509 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:19.509 "is_configured": true, 00:27:19.509 "data_offset": 0, 00:27:19.509 "data_size": 65536 00:27:19.509 } 00:27:19.509 ] 00:27:19.509 }' 00:27:19.509 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.509 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.767 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:19.768 15:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:19.768 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.768 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.768 15:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 017eb28c-db18-416b-9e0e-9f5d4a24e300 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.768 [2024-11-05 15:56:52.072213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:19.768 NewBaseBdev 00:27:19.768 [2024-11-05 15:56:52.072334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:19.768 [2024-11-05 15:56:52.072346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:19.768 [2024-11-05 15:56:52.072565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:19.768 [2024-11-05 15:56:52.072669] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:19.768 [2024-11-05 15:56:52.072677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:19.768 [2024-11-05 15:56:52.072839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.768 [ 00:27:19.768 { 00:27:19.768 "name": 
"NewBaseBdev", 00:27:19.768 "aliases": [ 00:27:19.768 "017eb28c-db18-416b-9e0e-9f5d4a24e300" 00:27:19.768 ], 00:27:19.768 "product_name": "Malloc disk", 00:27:19.768 "block_size": 512, 00:27:19.768 "num_blocks": 65536, 00:27:19.768 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:19.768 "assigned_rate_limits": { 00:27:19.768 "rw_ios_per_sec": 0, 00:27:19.768 "rw_mbytes_per_sec": 0, 00:27:19.768 "r_mbytes_per_sec": 0, 00:27:19.768 "w_mbytes_per_sec": 0 00:27:19.768 }, 00:27:19.768 "claimed": true, 00:27:19.768 "claim_type": "exclusive_write", 00:27:19.768 "zoned": false, 00:27:19.768 "supported_io_types": { 00:27:19.768 "read": true, 00:27:19.768 "write": true, 00:27:19.768 "unmap": true, 00:27:19.768 "flush": true, 00:27:19.768 "reset": true, 00:27:19.768 "nvme_admin": false, 00:27:19.768 "nvme_io": false, 00:27:19.768 "nvme_io_md": false, 00:27:19.768 "write_zeroes": true, 00:27:19.768 "zcopy": true, 00:27:19.768 "get_zone_info": false, 00:27:19.768 "zone_management": false, 00:27:19.768 "zone_append": false, 00:27:19.768 "compare": false, 00:27:19.768 "compare_and_write": false, 00:27:19.768 "abort": true, 00:27:19.768 "seek_hole": false, 00:27:19.768 "seek_data": false, 00:27:19.768 "copy": true, 00:27:19.768 "nvme_iov_md": false 00:27:19.768 }, 00:27:19.768 "memory_domains": [ 00:27:19.768 { 00:27:19.768 "dma_device_id": "system", 00:27:19.768 "dma_device_type": 1 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:19.768 "dma_device_type": 2 00:27:19.768 } 00:27:19.768 ], 00:27:19.768 "driver_specific": {} 00:27:19.768 } 00:27:19.768 ] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:27:19.768 15:56:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.768 "name": "Existed_Raid", 00:27:19.768 "uuid": "1c9a7e90-86fb-450c-a56e-cb988c9e60fd", 00:27:19.768 "strip_size_kb": 64, 00:27:19.768 "state": "online", 00:27:19.768 "raid_level": "raid0", 00:27:19.768 "superblock": false, 00:27:19.768 "num_base_bdevs": 4, 00:27:19.768 "num_base_bdevs_discovered": 4, 00:27:19.768 
"num_base_bdevs_operational": 4, 00:27:19.768 "base_bdevs_list": [ 00:27:19.768 { 00:27:19.768 "name": "NewBaseBdev", 00:27:19.768 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:19.768 "is_configured": true, 00:27:19.768 "data_offset": 0, 00:27:19.768 "data_size": 65536 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "name": "BaseBdev2", 00:27:19.768 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:19.768 "is_configured": true, 00:27:19.768 "data_offset": 0, 00:27:19.768 "data_size": 65536 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "name": "BaseBdev3", 00:27:19.768 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:19.768 "is_configured": true, 00:27:19.768 "data_offset": 0, 00:27:19.768 "data_size": 65536 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "name": "BaseBdev4", 00:27:19.768 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:19.768 "is_configured": true, 00:27:19.768 "data_offset": 0, 00:27:19.768 "data_size": 65536 00:27:19.768 } 00:27:19.768 ] 00:27:19.768 }' 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.768 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:20.335 
15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.335 [2024-11-05 15:56:52.456621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.335 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:20.335 "name": "Existed_Raid", 00:27:20.335 "aliases": [ 00:27:20.335 "1c9a7e90-86fb-450c-a56e-cb988c9e60fd" 00:27:20.335 ], 00:27:20.335 "product_name": "Raid Volume", 00:27:20.335 "block_size": 512, 00:27:20.335 "num_blocks": 262144, 00:27:20.335 "uuid": "1c9a7e90-86fb-450c-a56e-cb988c9e60fd", 00:27:20.335 "assigned_rate_limits": { 00:27:20.335 "rw_ios_per_sec": 0, 00:27:20.335 "rw_mbytes_per_sec": 0, 00:27:20.335 "r_mbytes_per_sec": 0, 00:27:20.335 "w_mbytes_per_sec": 0 00:27:20.335 }, 00:27:20.335 "claimed": false, 00:27:20.335 "zoned": false, 00:27:20.335 "supported_io_types": { 00:27:20.335 "read": true, 00:27:20.335 "write": true, 00:27:20.335 "unmap": true, 00:27:20.335 "flush": true, 00:27:20.335 "reset": true, 00:27:20.335 "nvme_admin": false, 00:27:20.335 "nvme_io": false, 00:27:20.336 "nvme_io_md": false, 00:27:20.336 "write_zeroes": true, 00:27:20.336 "zcopy": false, 00:27:20.336 "get_zone_info": false, 00:27:20.336 "zone_management": false, 00:27:20.336 "zone_append": false, 00:27:20.336 "compare": false, 00:27:20.336 "compare_and_write": false, 00:27:20.336 "abort": false, 00:27:20.336 "seek_hole": false, 00:27:20.336 "seek_data": false, 00:27:20.336 "copy": false, 00:27:20.336 "nvme_iov_md": false 00:27:20.336 }, 00:27:20.336 "memory_domains": [ 00:27:20.336 { 00:27:20.336 "dma_device_id": 
"system", 00:27:20.336 "dma_device_type": 1 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.336 "dma_device_type": 2 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "dma_device_id": "system", 00:27:20.336 "dma_device_type": 1 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.336 "dma_device_type": 2 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "dma_device_id": "system", 00:27:20.336 "dma_device_type": 1 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.336 "dma_device_type": 2 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "dma_device_id": "system", 00:27:20.336 "dma_device_type": 1 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.336 "dma_device_type": 2 00:27:20.336 } 00:27:20.336 ], 00:27:20.336 "driver_specific": { 00:27:20.336 "raid": { 00:27:20.336 "uuid": "1c9a7e90-86fb-450c-a56e-cb988c9e60fd", 00:27:20.336 "strip_size_kb": 64, 00:27:20.336 "state": "online", 00:27:20.336 "raid_level": "raid0", 00:27:20.336 "superblock": false, 00:27:20.336 "num_base_bdevs": 4, 00:27:20.336 "num_base_bdevs_discovered": 4, 00:27:20.336 "num_base_bdevs_operational": 4, 00:27:20.336 "base_bdevs_list": [ 00:27:20.336 { 00:27:20.336 "name": "NewBaseBdev", 00:27:20.336 "uuid": "017eb28c-db18-416b-9e0e-9f5d4a24e300", 00:27:20.336 "is_configured": true, 00:27:20.336 "data_offset": 0, 00:27:20.336 "data_size": 65536 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "name": "BaseBdev2", 00:27:20.336 "uuid": "d9fa75b2-baa3-4e42-8663-33b5a0b6c836", 00:27:20.336 "is_configured": true, 00:27:20.336 "data_offset": 0, 00:27:20.336 "data_size": 65536 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "name": "BaseBdev3", 00:27:20.336 "uuid": "ce564503-3ecd-410c-b314-3c62cf80c7c9", 00:27:20.336 "is_configured": true, 00:27:20.336 "data_offset": 0, 00:27:20.336 "data_size": 65536 00:27:20.336 }, 00:27:20.336 { 00:27:20.336 "name": 
"BaseBdev4", 00:27:20.336 "uuid": "fa334eaa-b0ec-4f03-9e18-ccb3bde8a134", 00:27:20.336 "is_configured": true, 00:27:20.336 "data_offset": 0, 00:27:20.336 "data_size": 65536 00:27:20.336 } 00:27:20.336 ] 00:27:20.336 } 00:27:20.336 } 00:27:20.336 }' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:20.336 BaseBdev2 00:27:20.336 BaseBdev3 00:27:20.336 BaseBdev4' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:20.336 15:56:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.336 [2024-11-05 15:56:52.716367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:20.336 [2024-11-05 15:56:52.716389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:20.336 [2024-11-05 15:56:52.716444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:20.336 [2024-11-05 15:56:52.716497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:20.336 [2024-11-05 15:56:52.716505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67464 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 67464 ']' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67464 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67464 00:27:20.336 killing process with pid 67464 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67464' 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67464 00:27:20.336 [2024-11-05 15:56:52.746394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:20.336 15:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67464 00:27:20.594 [2024-11-05 15:56:52.935809] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:21.161 00:27:21.161 real 0m8.053s 00:27:21.161 user 0m13.099s 00:27:21.161 sys 0m1.294s 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:21.161 ************************************ 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.161 END TEST raid_state_function_test 00:27:21.161 ************************************ 00:27:21.161 15:56:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:27:21.161 15:56:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:21.161 15:56:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:21.161 15:56:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:21.161 ************************************ 00:27:21.161 START TEST raid_state_function_test_sb 00:27:21.161 ************************************ 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:21.161 15:56:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:21.161 Process raid pid: 68102 00:27:21.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68102 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68102' 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68102 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68102 ']' 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:21.161 15:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.420 [2024-11-05 15:56:53.586679] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:27:21.420 [2024-11-05 15:56:53.586870] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.420 [2024-11-05 15:56:53.740900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.679 [2024-11-05 15:56:53.840185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.679 [2024-11-05 15:56:53.976640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:21.679 [2024-11-05 15:56:53.976669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.245 [2024-11-05 15:56:54.538043] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:22.245 [2024-11-05 15:56:54.538091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:22.245 [2024-11-05 15:56:54.538100] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:22.245 [2024-11-05 15:56:54.538110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:22.245 [2024-11-05 15:56:54.538116] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:27:22.245 [2024-11-05 15:56:54.538124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:22.245 [2024-11-05 15:56:54.538131] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:22.245 [2024-11-05 15:56:54.538139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.245 15:56:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:22.245 "name": "Existed_Raid", 00:27:22.245 "uuid": "de6aa76f-c905-4060-917f-f9fa8a4f0e5c", 00:27:22.245 "strip_size_kb": 64, 00:27:22.245 "state": "configuring", 00:27:22.245 "raid_level": "raid0", 00:27:22.245 "superblock": true, 00:27:22.245 "num_base_bdevs": 4, 00:27:22.245 "num_base_bdevs_discovered": 0, 00:27:22.245 "num_base_bdevs_operational": 4, 00:27:22.245 "base_bdevs_list": [ 00:27:22.245 { 00:27:22.245 "name": "BaseBdev1", 00:27:22.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.245 "is_configured": false, 00:27:22.245 "data_offset": 0, 00:27:22.245 "data_size": 0 00:27:22.245 }, 00:27:22.245 { 00:27:22.245 "name": "BaseBdev2", 00:27:22.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.245 "is_configured": false, 00:27:22.245 "data_offset": 0, 00:27:22.245 "data_size": 0 00:27:22.245 }, 00:27:22.245 { 00:27:22.245 "name": "BaseBdev3", 00:27:22.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.245 "is_configured": false, 00:27:22.245 "data_offset": 0, 00:27:22.245 "data_size": 0 00:27:22.245 }, 00:27:22.245 { 00:27:22.245 "name": "BaseBdev4", 00:27:22.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.245 "is_configured": false, 00:27:22.245 "data_offset": 0, 00:27:22.245 "data_size": 0 00:27:22.245 } 00:27:22.245 ] 00:27:22.245 }' 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.245 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.504 [2024-11-05 15:56:54.838061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:22.504 [2024-11-05 15:56:54.838093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.504 [2024-11-05 15:56:54.846068] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:22.504 [2024-11-05 15:56:54.846103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:22.504 [2024-11-05 15:56:54.846112] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:22.504 [2024-11-05 15:56:54.846121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:22.504 [2024-11-05 15:56:54.846127] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:22.504 [2024-11-05 15:56:54.846136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:22.504 [2024-11-05 15:56:54.846142] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:27:22.504 [2024-11-05 15:56:54.846150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.504 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.504 [2024-11-05 15:56:54.878527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.504 BaseBdev1 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.505 [ 00:27:22.505 { 00:27:22.505 "name": "BaseBdev1", 00:27:22.505 "aliases": [ 00:27:22.505 "a1d54252-488f-43b4-bc8e-a37a67c5a0ea" 00:27:22.505 ], 00:27:22.505 "product_name": "Malloc disk", 00:27:22.505 "block_size": 512, 00:27:22.505 "num_blocks": 65536, 00:27:22.505 "uuid": "a1d54252-488f-43b4-bc8e-a37a67c5a0ea", 00:27:22.505 "assigned_rate_limits": { 00:27:22.505 "rw_ios_per_sec": 0, 00:27:22.505 "rw_mbytes_per_sec": 0, 00:27:22.505 "r_mbytes_per_sec": 0, 00:27:22.505 "w_mbytes_per_sec": 0 00:27:22.505 }, 00:27:22.505 "claimed": true, 00:27:22.505 "claim_type": "exclusive_write", 00:27:22.505 "zoned": false, 00:27:22.505 "supported_io_types": { 00:27:22.505 "read": true, 00:27:22.505 "write": true, 00:27:22.505 "unmap": true, 00:27:22.505 "flush": true, 00:27:22.505 "reset": true, 00:27:22.505 "nvme_admin": false, 00:27:22.505 "nvme_io": false, 00:27:22.505 "nvme_io_md": false, 00:27:22.505 "write_zeroes": true, 00:27:22.505 "zcopy": true, 00:27:22.505 "get_zone_info": false, 00:27:22.505 "zone_management": false, 00:27:22.505 "zone_append": false, 00:27:22.505 "compare": false, 00:27:22.505 "compare_and_write": false, 00:27:22.505 "abort": true, 00:27:22.505 "seek_hole": false, 00:27:22.505 "seek_data": false, 00:27:22.505 "copy": true, 00:27:22.505 "nvme_iov_md": false 00:27:22.505 }, 00:27:22.505 "memory_domains": [ 00:27:22.505 { 00:27:22.505 "dma_device_id": "system", 00:27:22.505 "dma_device_type": 1 00:27:22.505 }, 00:27:22.505 { 00:27:22.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.505 "dma_device_type": 2 00:27:22.505 } 00:27:22.505 ], 00:27:22.505 "driver_specific": {} 
00:27:22.505 } 00:27:22.505 ] 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.505 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.763 15:56:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.763 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:22.763 "name": "Existed_Raid", 00:27:22.763 "uuid": "f978256b-76e5-4e3e-a333-71874990a476", 00:27:22.763 "strip_size_kb": 64, 00:27:22.763 "state": "configuring", 00:27:22.763 "raid_level": "raid0", 00:27:22.763 "superblock": true, 00:27:22.763 "num_base_bdevs": 4, 00:27:22.763 "num_base_bdevs_discovered": 1, 00:27:22.763 "num_base_bdevs_operational": 4, 00:27:22.763 "base_bdevs_list": [ 00:27:22.763 { 00:27:22.763 "name": "BaseBdev1", 00:27:22.763 "uuid": "a1d54252-488f-43b4-bc8e-a37a67c5a0ea", 00:27:22.763 "is_configured": true, 00:27:22.763 "data_offset": 2048, 00:27:22.763 "data_size": 63488 00:27:22.763 }, 00:27:22.763 { 00:27:22.763 "name": "BaseBdev2", 00:27:22.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.763 "is_configured": false, 00:27:22.763 "data_offset": 0, 00:27:22.763 "data_size": 0 00:27:22.763 }, 00:27:22.763 { 00:27:22.763 "name": "BaseBdev3", 00:27:22.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.763 "is_configured": false, 00:27:22.763 "data_offset": 0, 00:27:22.763 "data_size": 0 00:27:22.763 }, 00:27:22.763 { 00:27:22.763 "name": "BaseBdev4", 00:27:22.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.763 "is_configured": false, 00:27:22.763 "data_offset": 0, 00:27:22.763 "data_size": 0 00:27:22.763 } 00:27:22.763 ] 00:27:22.763 }' 00:27:22.763 15:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.763 15:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:23.022 [2024-11-05 15:56:55.214646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:23.022 [2024-11-05 15:56:55.214798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.022 [2024-11-05 15:56:55.222707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:23.022 [2024-11-05 15:56:55.224617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:23.022 [2024-11-05 15:56:55.224739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:23.022 [2024-11-05 15:56:55.224800] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:23.022 [2024-11-05 15:56:55.224829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:23.022 [2024-11-05 15:56:55.225261] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:23.022 [2024-11-05 15:56:55.225295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:23.022 15:56:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.022 "name": 
"Existed_Raid", 00:27:23.022 "uuid": "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8", 00:27:23.022 "strip_size_kb": 64, 00:27:23.022 "state": "configuring", 00:27:23.022 "raid_level": "raid0", 00:27:23.022 "superblock": true, 00:27:23.022 "num_base_bdevs": 4, 00:27:23.022 "num_base_bdevs_discovered": 1, 00:27:23.022 "num_base_bdevs_operational": 4, 00:27:23.022 "base_bdevs_list": [ 00:27:23.022 { 00:27:23.022 "name": "BaseBdev1", 00:27:23.022 "uuid": "a1d54252-488f-43b4-bc8e-a37a67c5a0ea", 00:27:23.022 "is_configured": true, 00:27:23.022 "data_offset": 2048, 00:27:23.022 "data_size": 63488 00:27:23.022 }, 00:27:23.022 { 00:27:23.022 "name": "BaseBdev2", 00:27:23.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.022 "is_configured": false, 00:27:23.022 "data_offset": 0, 00:27:23.022 "data_size": 0 00:27:23.022 }, 00:27:23.022 { 00:27:23.022 "name": "BaseBdev3", 00:27:23.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.022 "is_configured": false, 00:27:23.022 "data_offset": 0, 00:27:23.022 "data_size": 0 00:27:23.022 }, 00:27:23.022 { 00:27:23.022 "name": "BaseBdev4", 00:27:23.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.022 "is_configured": false, 00:27:23.022 "data_offset": 0, 00:27:23.022 "data_size": 0 00:27:23.022 } 00:27:23.022 ] 00:27:23.022 }' 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.022 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.280 [2024-11-05 15:56:55.553103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:27:23.280 BaseBdev2 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.280 [ 00:27:23.280 { 00:27:23.280 "name": "BaseBdev2", 00:27:23.280 "aliases": [ 00:27:23.280 "ab076896-04db-4856-aea1-25197596ff13" 00:27:23.280 ], 00:27:23.280 "product_name": "Malloc disk", 00:27:23.280 "block_size": 512, 00:27:23.280 "num_blocks": 65536, 00:27:23.280 "uuid": "ab076896-04db-4856-aea1-25197596ff13", 00:27:23.280 
"assigned_rate_limits": { 00:27:23.280 "rw_ios_per_sec": 0, 00:27:23.280 "rw_mbytes_per_sec": 0, 00:27:23.280 "r_mbytes_per_sec": 0, 00:27:23.280 "w_mbytes_per_sec": 0 00:27:23.280 }, 00:27:23.280 "claimed": true, 00:27:23.280 "claim_type": "exclusive_write", 00:27:23.280 "zoned": false, 00:27:23.280 "supported_io_types": { 00:27:23.280 "read": true, 00:27:23.280 "write": true, 00:27:23.280 "unmap": true, 00:27:23.280 "flush": true, 00:27:23.280 "reset": true, 00:27:23.280 "nvme_admin": false, 00:27:23.280 "nvme_io": false, 00:27:23.280 "nvme_io_md": false, 00:27:23.280 "write_zeroes": true, 00:27:23.280 "zcopy": true, 00:27:23.280 "get_zone_info": false, 00:27:23.280 "zone_management": false, 00:27:23.280 "zone_append": false, 00:27:23.280 "compare": false, 00:27:23.280 "compare_and_write": false, 00:27:23.280 "abort": true, 00:27:23.280 "seek_hole": false, 00:27:23.280 "seek_data": false, 00:27:23.280 "copy": true, 00:27:23.280 "nvme_iov_md": false 00:27:23.280 }, 00:27:23.280 "memory_domains": [ 00:27:23.280 { 00:27:23.280 "dma_device_id": "system", 00:27:23.280 "dma_device_type": 1 00:27:23.280 }, 00:27:23.280 { 00:27:23.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.280 "dma_device_type": 2 00:27:23.280 } 00:27:23.280 ], 00:27:23.280 "driver_specific": {} 00:27:23.280 } 00:27:23.280 ] 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:23.280 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.281 "name": "Existed_Raid", 00:27:23.281 "uuid": "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8", 00:27:23.281 "strip_size_kb": 64, 00:27:23.281 "state": "configuring", 00:27:23.281 "raid_level": "raid0", 00:27:23.281 "superblock": true, 00:27:23.281 "num_base_bdevs": 4, 00:27:23.281 "num_base_bdevs_discovered": 2, 00:27:23.281 "num_base_bdevs_operational": 4, 
00:27:23.281 "base_bdevs_list": [ 00:27:23.281 { 00:27:23.281 "name": "BaseBdev1", 00:27:23.281 "uuid": "a1d54252-488f-43b4-bc8e-a37a67c5a0ea", 00:27:23.281 "is_configured": true, 00:27:23.281 "data_offset": 2048, 00:27:23.281 "data_size": 63488 00:27:23.281 }, 00:27:23.281 { 00:27:23.281 "name": "BaseBdev2", 00:27:23.281 "uuid": "ab076896-04db-4856-aea1-25197596ff13", 00:27:23.281 "is_configured": true, 00:27:23.281 "data_offset": 2048, 00:27:23.281 "data_size": 63488 00:27:23.281 }, 00:27:23.281 { 00:27:23.281 "name": "BaseBdev3", 00:27:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.281 "is_configured": false, 00:27:23.281 "data_offset": 0, 00:27:23.281 "data_size": 0 00:27:23.281 }, 00:27:23.281 { 00:27:23.281 "name": "BaseBdev4", 00:27:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.281 "is_configured": false, 00:27:23.281 "data_offset": 0, 00:27:23.281 "data_size": 0 00:27:23.281 } 00:27:23.281 ] 00:27:23.281 }' 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.281 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.539 [2024-11-05 15:56:55.931440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:23.539 BaseBdev3 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.539 [ 00:27:23.539 { 00:27:23.539 "name": "BaseBdev3", 00:27:23.539 "aliases": [ 00:27:23.539 "9166338f-fe5c-4e18-b0f4-7955c93ec7d4" 00:27:23.539 ], 00:27:23.539 "product_name": "Malloc disk", 00:27:23.539 "block_size": 512, 00:27:23.539 "num_blocks": 65536, 00:27:23.539 "uuid": "9166338f-fe5c-4e18-b0f4-7955c93ec7d4", 00:27:23.539 "assigned_rate_limits": { 00:27:23.539 "rw_ios_per_sec": 0, 00:27:23.539 "rw_mbytes_per_sec": 0, 00:27:23.539 "r_mbytes_per_sec": 0, 00:27:23.539 "w_mbytes_per_sec": 0 00:27:23.539 }, 00:27:23.539 "claimed": true, 00:27:23.539 "claim_type": "exclusive_write", 00:27:23.539 "zoned": false, 00:27:23.539 "supported_io_types": { 00:27:23.539 "read": true, 00:27:23.539 
"write": true, 00:27:23.539 "unmap": true, 00:27:23.539 "flush": true, 00:27:23.539 "reset": true, 00:27:23.539 "nvme_admin": false, 00:27:23.539 "nvme_io": false, 00:27:23.539 "nvme_io_md": false, 00:27:23.539 "write_zeroes": true, 00:27:23.539 "zcopy": true, 00:27:23.539 "get_zone_info": false, 00:27:23.539 "zone_management": false, 00:27:23.539 "zone_append": false, 00:27:23.539 "compare": false, 00:27:23.539 "compare_and_write": false, 00:27:23.539 "abort": true, 00:27:23.539 "seek_hole": false, 00:27:23.539 "seek_data": false, 00:27:23.539 "copy": true, 00:27:23.539 "nvme_iov_md": false 00:27:23.539 }, 00:27:23.539 "memory_domains": [ 00:27:23.539 { 00:27:23.539 "dma_device_id": "system", 00:27:23.539 "dma_device_type": 1 00:27:23.539 }, 00:27:23.539 { 00:27:23.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.539 "dma_device_type": 2 00:27:23.539 } 00:27:23.539 ], 00:27:23.539 "driver_specific": {} 00:27:23.539 } 00:27:23.539 ] 00:27:23.539 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:23.540 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.797 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.797 "name": "Existed_Raid", 00:27:23.797 "uuid": "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8", 00:27:23.797 "strip_size_kb": 64, 00:27:23.797 "state": "configuring", 00:27:23.797 "raid_level": "raid0", 00:27:23.797 "superblock": true, 00:27:23.797 "num_base_bdevs": 4, 00:27:23.797 "num_base_bdevs_discovered": 3, 00:27:23.797 "num_base_bdevs_operational": 4, 00:27:23.797 "base_bdevs_list": [ 00:27:23.797 { 00:27:23.797 "name": "BaseBdev1", 00:27:23.797 "uuid": "a1d54252-488f-43b4-bc8e-a37a67c5a0ea", 00:27:23.797 "is_configured": true, 00:27:23.797 "data_offset": 2048, 00:27:23.797 "data_size": 63488 00:27:23.797 }, 00:27:23.797 { 00:27:23.797 "name": "BaseBdev2", 00:27:23.797 "uuid": 
"ab076896-04db-4856-aea1-25197596ff13", 00:27:23.797 "is_configured": true, 00:27:23.797 "data_offset": 2048, 00:27:23.797 "data_size": 63488 00:27:23.797 }, 00:27:23.797 { 00:27:23.797 "name": "BaseBdev3", 00:27:23.797 "uuid": "9166338f-fe5c-4e18-b0f4-7955c93ec7d4", 00:27:23.797 "is_configured": true, 00:27:23.797 "data_offset": 2048, 00:27:23.797 "data_size": 63488 00:27:23.797 }, 00:27:23.797 { 00:27:23.797 "name": "BaseBdev4", 00:27:23.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.797 "is_configured": false, 00:27:23.797 "data_offset": 0, 00:27:23.797 "data_size": 0 00:27:23.798 } 00:27:23.798 ] 00:27:23.798 }' 00:27:23.798 15:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.798 15:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 [2024-11-05 15:56:56.290073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:24.056 [2024-11-05 15:56:56.290445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:24.056 [2024-11-05 15:56:56.290545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:24.056 [2024-11-05 15:56:56.290839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:24.056 [2024-11-05 15:56:56.291056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:24.056 [2024-11-05 15:56:56.291137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:27:24.056 [2024-11-05 15:56:56.291331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.056 BaseBdev4 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 [ 00:27:24.056 { 00:27:24.056 "name": "BaseBdev4", 00:27:24.056 "aliases": [ 00:27:24.056 "50ea5bd9-fa5d-4c89-8aa9-d4c23fa204da" 00:27:24.056 ], 00:27:24.056 "product_name": "Malloc disk", 00:27:24.056 "block_size": 512, 00:27:24.056 
"num_blocks": 65536, 00:27:24.056 "uuid": "50ea5bd9-fa5d-4c89-8aa9-d4c23fa204da", 00:27:24.056 "assigned_rate_limits": { 00:27:24.056 "rw_ios_per_sec": 0, 00:27:24.056 "rw_mbytes_per_sec": 0, 00:27:24.056 "r_mbytes_per_sec": 0, 00:27:24.056 "w_mbytes_per_sec": 0 00:27:24.056 }, 00:27:24.056 "claimed": true, 00:27:24.056 "claim_type": "exclusive_write", 00:27:24.056 "zoned": false, 00:27:24.056 "supported_io_types": { 00:27:24.056 "read": true, 00:27:24.056 "write": true, 00:27:24.056 "unmap": true, 00:27:24.056 "flush": true, 00:27:24.056 "reset": true, 00:27:24.056 "nvme_admin": false, 00:27:24.056 "nvme_io": false, 00:27:24.056 "nvme_io_md": false, 00:27:24.056 "write_zeroes": true, 00:27:24.056 "zcopy": true, 00:27:24.056 "get_zone_info": false, 00:27:24.056 "zone_management": false, 00:27:24.056 "zone_append": false, 00:27:24.056 "compare": false, 00:27:24.056 "compare_and_write": false, 00:27:24.056 "abort": true, 00:27:24.056 "seek_hole": false, 00:27:24.056 "seek_data": false, 00:27:24.056 "copy": true, 00:27:24.056 "nvme_iov_md": false 00:27:24.056 }, 00:27:24.056 "memory_domains": [ 00:27:24.056 { 00:27:24.056 "dma_device_id": "system", 00:27:24.056 "dma_device_type": 1 00:27:24.056 }, 00:27:24.056 { 00:27:24.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.056 "dma_device_type": 2 00:27:24.056 } 00:27:24.056 ], 00:27:24.056 "driver_specific": {} 00:27:24.056 } 00:27:24.056 ] 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.056 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.056 "name": "Existed_Raid", 00:27:24.056 "uuid": "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8", 00:27:24.056 "strip_size_kb": 64, 00:27:24.056 "state": "online", 00:27:24.056 "raid_level": "raid0", 00:27:24.056 "superblock": true, 00:27:24.056 "num_base_bdevs": 4, 
00:27:24.056 "num_base_bdevs_discovered": 4, 00:27:24.056 "num_base_bdevs_operational": 4, 00:27:24.056 "base_bdevs_list": [ 00:27:24.056 { 00:27:24.056 "name": "BaseBdev1", 00:27:24.056 "uuid": "a1d54252-488f-43b4-bc8e-a37a67c5a0ea", 00:27:24.056 "is_configured": true, 00:27:24.056 "data_offset": 2048, 00:27:24.056 "data_size": 63488 00:27:24.056 }, 00:27:24.056 { 00:27:24.056 "name": "BaseBdev2", 00:27:24.056 "uuid": "ab076896-04db-4856-aea1-25197596ff13", 00:27:24.056 "is_configured": true, 00:27:24.057 "data_offset": 2048, 00:27:24.057 "data_size": 63488 00:27:24.057 }, 00:27:24.057 { 00:27:24.057 "name": "BaseBdev3", 00:27:24.057 "uuid": "9166338f-fe5c-4e18-b0f4-7955c93ec7d4", 00:27:24.057 "is_configured": true, 00:27:24.057 "data_offset": 2048, 00:27:24.057 "data_size": 63488 00:27:24.057 }, 00:27:24.057 { 00:27:24.057 "name": "BaseBdev4", 00:27:24.057 "uuid": "50ea5bd9-fa5d-4c89-8aa9-d4c23fa204da", 00:27:24.057 "is_configured": true, 00:27:24.057 "data_offset": 2048, 00:27:24.057 "data_size": 63488 00:27:24.057 } 00:27:24.057 ] 00:27:24.057 }' 00:27:24.057 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.057 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:24.315 
15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:24.315 [2024-11-05 15:56:56.646570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.315 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:24.315 "name": "Existed_Raid", 00:27:24.315 "aliases": [ 00:27:24.315 "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8" 00:27:24.315 ], 00:27:24.315 "product_name": "Raid Volume", 00:27:24.315 "block_size": 512, 00:27:24.315 "num_blocks": 253952, 00:27:24.315 "uuid": "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8", 00:27:24.315 "assigned_rate_limits": { 00:27:24.315 "rw_ios_per_sec": 0, 00:27:24.315 "rw_mbytes_per_sec": 0, 00:27:24.315 "r_mbytes_per_sec": 0, 00:27:24.315 "w_mbytes_per_sec": 0 00:27:24.315 }, 00:27:24.315 "claimed": false, 00:27:24.315 "zoned": false, 00:27:24.315 "supported_io_types": { 00:27:24.315 "read": true, 00:27:24.315 "write": true, 00:27:24.315 "unmap": true, 00:27:24.315 "flush": true, 00:27:24.315 "reset": true, 00:27:24.315 "nvme_admin": false, 00:27:24.315 "nvme_io": false, 00:27:24.315 "nvme_io_md": false, 00:27:24.315 "write_zeroes": true, 00:27:24.315 "zcopy": false, 00:27:24.315 "get_zone_info": false, 00:27:24.315 "zone_management": false, 00:27:24.315 "zone_append": false, 00:27:24.315 "compare": false, 00:27:24.315 "compare_and_write": false, 00:27:24.315 "abort": false, 00:27:24.315 "seek_hole": false, 00:27:24.315 "seek_data": false, 00:27:24.315 "copy": false, 00:27:24.315 
"nvme_iov_md": false 00:27:24.315 }, 00:27:24.315 "memory_domains": [ 00:27:24.315 { 00:27:24.315 "dma_device_id": "system", 00:27:24.315 "dma_device_type": 1 00:27:24.315 }, 00:27:24.315 { 00:27:24.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.315 "dma_device_type": 2 00:27:24.315 }, 00:27:24.315 { 00:27:24.315 "dma_device_id": "system", 00:27:24.315 "dma_device_type": 1 00:27:24.315 }, 00:27:24.315 { 00:27:24.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.315 "dma_device_type": 2 00:27:24.315 }, 00:27:24.315 { 00:27:24.315 "dma_device_id": "system", 00:27:24.315 "dma_device_type": 1 00:27:24.315 }, 00:27:24.315 { 00:27:24.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.315 "dma_device_type": 2 00:27:24.315 }, 00:27:24.315 { 00:27:24.315 "dma_device_id": "system", 00:27:24.316 "dma_device_type": 1 00:27:24.316 }, 00:27:24.316 { 00:27:24.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.316 "dma_device_type": 2 00:27:24.316 } 00:27:24.316 ], 00:27:24.316 "driver_specific": { 00:27:24.316 "raid": { 00:27:24.316 "uuid": "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8", 00:27:24.316 "strip_size_kb": 64, 00:27:24.316 "state": "online", 00:27:24.316 "raid_level": "raid0", 00:27:24.316 "superblock": true, 00:27:24.316 "num_base_bdevs": 4, 00:27:24.316 "num_base_bdevs_discovered": 4, 00:27:24.316 "num_base_bdevs_operational": 4, 00:27:24.316 "base_bdevs_list": [ 00:27:24.316 { 00:27:24.316 "name": "BaseBdev1", 00:27:24.316 "uuid": "a1d54252-488f-43b4-bc8e-a37a67c5a0ea", 00:27:24.316 "is_configured": true, 00:27:24.316 "data_offset": 2048, 00:27:24.316 "data_size": 63488 00:27:24.316 }, 00:27:24.316 { 00:27:24.316 "name": "BaseBdev2", 00:27:24.316 "uuid": "ab076896-04db-4856-aea1-25197596ff13", 00:27:24.316 "is_configured": true, 00:27:24.316 "data_offset": 2048, 00:27:24.316 "data_size": 63488 00:27:24.316 }, 00:27:24.316 { 00:27:24.316 "name": "BaseBdev3", 00:27:24.316 "uuid": "9166338f-fe5c-4e18-b0f4-7955c93ec7d4", 00:27:24.316 "is_configured": true, 
00:27:24.316 "data_offset": 2048, 00:27:24.316 "data_size": 63488 00:27:24.316 }, 00:27:24.316 { 00:27:24.316 "name": "BaseBdev4", 00:27:24.316 "uuid": "50ea5bd9-fa5d-4c89-8aa9-d4c23fa204da", 00:27:24.316 "is_configured": true, 00:27:24.316 "data_offset": 2048, 00:27:24.316 "data_size": 63488 00:27:24.316 } 00:27:24.316 ] 00:27:24.316 } 00:27:24.316 } 00:27:24.316 }' 00:27:24.316 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:24.316 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:24.316 BaseBdev2 00:27:24.316 BaseBdev3 00:27:24.316 BaseBdev4' 00:27:24.316 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.575 15:56:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 [2024-11-05 15:56:56.862294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:24.575 [2024-11-05 15:56:56.862320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:24.575 [2024-11-05 15:56:56.862375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:27:24.575 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.576 "name": "Existed_Raid", 00:27:24.576 "uuid": "2f727f81-5f92-4df2-a45a-ffc0cdcc95c8", 00:27:24.576 "strip_size_kb": 64, 00:27:24.576 "state": "offline", 00:27:24.576 "raid_level": "raid0", 00:27:24.576 "superblock": true, 00:27:24.576 "num_base_bdevs": 4, 00:27:24.576 "num_base_bdevs_discovered": 3, 00:27:24.576 "num_base_bdevs_operational": 3, 00:27:24.576 "base_bdevs_list": [ 00:27:24.576 { 00:27:24.576 "name": null, 00:27:24.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.576 "is_configured": false, 00:27:24.576 "data_offset": 0, 00:27:24.576 "data_size": 63488 00:27:24.576 }, 00:27:24.576 { 00:27:24.576 "name": "BaseBdev2", 00:27:24.576 "uuid": "ab076896-04db-4856-aea1-25197596ff13", 00:27:24.576 "is_configured": true, 00:27:24.576 "data_offset": 2048, 00:27:24.576 "data_size": 63488 00:27:24.576 }, 00:27:24.576 { 00:27:24.576 "name": "BaseBdev3", 00:27:24.576 "uuid": "9166338f-fe5c-4e18-b0f4-7955c93ec7d4", 00:27:24.576 "is_configured": true, 00:27:24.576 "data_offset": 2048, 00:27:24.576 "data_size": 63488 00:27:24.576 }, 00:27:24.576 { 00:27:24.576 "name": "BaseBdev4", 00:27:24.576 "uuid": "50ea5bd9-fa5d-4c89-8aa9-d4c23fa204da", 00:27:24.576 "is_configured": true, 00:27:24.576 "data_offset": 2048, 00:27:24.576 "data_size": 63488 00:27:24.576 } 00:27:24.576 ] 00:27:24.576 }' 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.576 15:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.834 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:24.834 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:24.834 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:24.834 15:56:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.834 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.834 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 [2024-11-05 15:56:57.272550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 [2024-11-05 15:56:57.375046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:25.090 15:56:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.090 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 [2024-11-05 15:56:57.473484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:25.090 [2024-11-05 15:56:57.473522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.348 BaseBdev2 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.348 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.348 [ 00:27:25.348 { 00:27:25.348 "name": "BaseBdev2", 00:27:25.348 "aliases": [ 00:27:25.348 
"21939b11-7844-45aa-a021-d31a5abe4c57" 00:27:25.348 ], 00:27:25.348 "product_name": "Malloc disk", 00:27:25.348 "block_size": 512, 00:27:25.348 "num_blocks": 65536, 00:27:25.348 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:25.348 "assigned_rate_limits": { 00:27:25.348 "rw_ios_per_sec": 0, 00:27:25.348 "rw_mbytes_per_sec": 0, 00:27:25.349 "r_mbytes_per_sec": 0, 00:27:25.349 "w_mbytes_per_sec": 0 00:27:25.349 }, 00:27:25.349 "claimed": false, 00:27:25.349 "zoned": false, 00:27:25.349 "supported_io_types": { 00:27:25.349 "read": true, 00:27:25.349 "write": true, 00:27:25.349 "unmap": true, 00:27:25.349 "flush": true, 00:27:25.349 "reset": true, 00:27:25.349 "nvme_admin": false, 00:27:25.349 "nvme_io": false, 00:27:25.349 "nvme_io_md": false, 00:27:25.349 "write_zeroes": true, 00:27:25.349 "zcopy": true, 00:27:25.349 "get_zone_info": false, 00:27:25.349 "zone_management": false, 00:27:25.349 "zone_append": false, 00:27:25.349 "compare": false, 00:27:25.349 "compare_and_write": false, 00:27:25.349 "abort": true, 00:27:25.349 "seek_hole": false, 00:27:25.349 "seek_data": false, 00:27:25.349 "copy": true, 00:27:25.349 "nvme_iov_md": false 00:27:25.349 }, 00:27:25.349 "memory_domains": [ 00:27:25.349 { 00:27:25.349 "dma_device_id": "system", 00:27:25.349 "dma_device_type": 1 00:27:25.349 }, 00:27:25.349 { 00:27:25.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.349 "dma_device_type": 2 00:27:25.349 } 00:27:25.349 ], 00:27:25.349 "driver_specific": {} 00:27:25.349 } 00:27:25.349 ] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:25.349 15:56:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.349 BaseBdev3 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.349 [ 00:27:25.349 { 
00:27:25.349 "name": "BaseBdev3", 00:27:25.349 "aliases": [ 00:27:25.349 "0b254371-fa5c-46af-8ca4-5cd1e84734aa" 00:27:25.349 ], 00:27:25.349 "product_name": "Malloc disk", 00:27:25.349 "block_size": 512, 00:27:25.349 "num_blocks": 65536, 00:27:25.349 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:25.349 "assigned_rate_limits": { 00:27:25.349 "rw_ios_per_sec": 0, 00:27:25.349 "rw_mbytes_per_sec": 0, 00:27:25.349 "r_mbytes_per_sec": 0, 00:27:25.349 "w_mbytes_per_sec": 0 00:27:25.349 }, 00:27:25.349 "claimed": false, 00:27:25.349 "zoned": false, 00:27:25.349 "supported_io_types": { 00:27:25.349 "read": true, 00:27:25.349 "write": true, 00:27:25.349 "unmap": true, 00:27:25.349 "flush": true, 00:27:25.349 "reset": true, 00:27:25.349 "nvme_admin": false, 00:27:25.349 "nvme_io": false, 00:27:25.349 "nvme_io_md": false, 00:27:25.349 "write_zeroes": true, 00:27:25.349 "zcopy": true, 00:27:25.349 "get_zone_info": false, 00:27:25.349 "zone_management": false, 00:27:25.349 "zone_append": false, 00:27:25.349 "compare": false, 00:27:25.349 "compare_and_write": false, 00:27:25.349 "abort": true, 00:27:25.349 "seek_hole": false, 00:27:25.349 "seek_data": false, 00:27:25.349 "copy": true, 00:27:25.349 "nvme_iov_md": false 00:27:25.349 }, 00:27:25.349 "memory_domains": [ 00:27:25.349 { 00:27:25.349 "dma_device_id": "system", 00:27:25.349 "dma_device_type": 1 00:27:25.349 }, 00:27:25.349 { 00:27:25.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.349 "dma_device_type": 2 00:27:25.349 } 00:27:25.349 ], 00:27:25.349 "driver_specific": {} 00:27:25.349 } 00:27:25.349 ] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.349 BaseBdev4 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:27:25.349 [ 00:27:25.349 { 00:27:25.349 "name": "BaseBdev4", 00:27:25.349 "aliases": [ 00:27:25.349 "d9164956-5110-4812-a2f5-14c2b7c524a1" 00:27:25.349 ], 00:27:25.349 "product_name": "Malloc disk", 00:27:25.349 "block_size": 512, 00:27:25.349 "num_blocks": 65536, 00:27:25.349 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:25.349 "assigned_rate_limits": { 00:27:25.349 "rw_ios_per_sec": 0, 00:27:25.349 "rw_mbytes_per_sec": 0, 00:27:25.349 "r_mbytes_per_sec": 0, 00:27:25.349 "w_mbytes_per_sec": 0 00:27:25.349 }, 00:27:25.349 "claimed": false, 00:27:25.349 "zoned": false, 00:27:25.349 "supported_io_types": { 00:27:25.349 "read": true, 00:27:25.349 "write": true, 00:27:25.349 "unmap": true, 00:27:25.349 "flush": true, 00:27:25.349 "reset": true, 00:27:25.349 "nvme_admin": false, 00:27:25.349 "nvme_io": false, 00:27:25.349 "nvme_io_md": false, 00:27:25.349 "write_zeroes": true, 00:27:25.349 "zcopy": true, 00:27:25.349 "get_zone_info": false, 00:27:25.349 "zone_management": false, 00:27:25.349 "zone_append": false, 00:27:25.349 "compare": false, 00:27:25.349 "compare_and_write": false, 00:27:25.349 "abort": true, 00:27:25.349 "seek_hole": false, 00:27:25.349 "seek_data": false, 00:27:25.349 "copy": true, 00:27:25.349 "nvme_iov_md": false 00:27:25.349 }, 00:27:25.349 "memory_domains": [ 00:27:25.349 { 00:27:25.349 "dma_device_id": "system", 00:27:25.349 "dma_device_type": 1 00:27:25.349 }, 00:27:25.349 { 00:27:25.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.349 "dma_device_type": 2 00:27:25.349 } 00:27:25.349 ], 00:27:25.349 "driver_specific": {} 00:27:25.349 } 00:27:25.349 ] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:25.349 15:56:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.349 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.349 [2024-11-05 15:56:57.734889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:25.349 [2024-11-05 15:56:57.735024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:25.349 [2024-11-05 15:56:57.735096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:25.350 [2024-11-05 15:56:57.736952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:25.350 [2024-11-05 15:56:57.737079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.350 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.608 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.608 "name": "Existed_Raid", 00:27:25.608 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:25.608 "strip_size_kb": 64, 00:27:25.608 "state": "configuring", 00:27:25.608 "raid_level": "raid0", 00:27:25.608 "superblock": true, 00:27:25.608 "num_base_bdevs": 4, 00:27:25.608 "num_base_bdevs_discovered": 3, 00:27:25.608 "num_base_bdevs_operational": 4, 00:27:25.608 "base_bdevs_list": [ 00:27:25.608 { 00:27:25.608 "name": "BaseBdev1", 00:27:25.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.608 "is_configured": false, 00:27:25.608 "data_offset": 0, 00:27:25.608 "data_size": 0 00:27:25.608 }, 00:27:25.608 { 00:27:25.608 "name": "BaseBdev2", 00:27:25.608 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:25.608 "is_configured": true, 00:27:25.608 "data_offset": 2048, 00:27:25.608 "data_size": 63488 
00:27:25.608 }, 00:27:25.608 { 00:27:25.608 "name": "BaseBdev3", 00:27:25.608 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:25.608 "is_configured": true, 00:27:25.608 "data_offset": 2048, 00:27:25.608 "data_size": 63488 00:27:25.608 }, 00:27:25.608 { 00:27:25.608 "name": "BaseBdev4", 00:27:25.608 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:25.608 "is_configured": true, 00:27:25.608 "data_offset": 2048, 00:27:25.608 "data_size": 63488 00:27:25.608 } 00:27:25.608 ] 00:27:25.608 }' 00:27:25.608 15:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.608 15:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.866 [2024-11-05 15:56:58.062944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.866 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.867 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.867 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.867 "name": "Existed_Raid", 00:27:25.867 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:25.867 "strip_size_kb": 64, 00:27:25.867 "state": "configuring", 00:27:25.867 "raid_level": "raid0", 00:27:25.867 "superblock": true, 00:27:25.867 "num_base_bdevs": 4, 00:27:25.867 "num_base_bdevs_discovered": 2, 00:27:25.867 "num_base_bdevs_operational": 4, 00:27:25.867 "base_bdevs_list": [ 00:27:25.867 { 00:27:25.867 "name": "BaseBdev1", 00:27:25.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.867 "is_configured": false, 00:27:25.867 "data_offset": 0, 00:27:25.867 "data_size": 0 00:27:25.867 }, 00:27:25.867 { 00:27:25.867 "name": null, 00:27:25.867 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:25.867 "is_configured": false, 00:27:25.867 "data_offset": 0, 00:27:25.867 "data_size": 63488 
00:27:25.867 }, 00:27:25.867 { 00:27:25.867 "name": "BaseBdev3", 00:27:25.867 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:25.867 "is_configured": true, 00:27:25.867 "data_offset": 2048, 00:27:25.867 "data_size": 63488 00:27:25.867 }, 00:27:25.867 { 00:27:25.867 "name": "BaseBdev4", 00:27:25.867 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:25.867 "is_configured": true, 00:27:25.867 "data_offset": 2048, 00:27:25.867 "data_size": 63488 00:27:25.867 } 00:27:25.867 ] 00:27:25.867 }' 00:27:25.867 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.867 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.126 [2024-11-05 15:56:58.429194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:26.126 BaseBdev1 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.126 [ 00:27:26.126 { 00:27:26.126 "name": "BaseBdev1", 00:27:26.126 "aliases": [ 00:27:26.126 "379eb90b-991b-4766-be37-75bea4a92220" 00:27:26.126 ], 00:27:26.126 "product_name": "Malloc disk", 00:27:26.126 "block_size": 512, 00:27:26.126 "num_blocks": 65536, 00:27:26.126 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:26.126 "assigned_rate_limits": { 00:27:26.126 "rw_ios_per_sec": 0, 00:27:26.126 "rw_mbytes_per_sec": 0, 
00:27:26.126 "r_mbytes_per_sec": 0, 00:27:26.126 "w_mbytes_per_sec": 0 00:27:26.126 }, 00:27:26.126 "claimed": true, 00:27:26.126 "claim_type": "exclusive_write", 00:27:26.126 "zoned": false, 00:27:26.126 "supported_io_types": { 00:27:26.126 "read": true, 00:27:26.126 "write": true, 00:27:26.126 "unmap": true, 00:27:26.126 "flush": true, 00:27:26.126 "reset": true, 00:27:26.126 "nvme_admin": false, 00:27:26.126 "nvme_io": false, 00:27:26.126 "nvme_io_md": false, 00:27:26.126 "write_zeroes": true, 00:27:26.126 "zcopy": true, 00:27:26.126 "get_zone_info": false, 00:27:26.126 "zone_management": false, 00:27:26.126 "zone_append": false, 00:27:26.126 "compare": false, 00:27:26.126 "compare_and_write": false, 00:27:26.126 "abort": true, 00:27:26.126 "seek_hole": false, 00:27:26.126 "seek_data": false, 00:27:26.126 "copy": true, 00:27:26.126 "nvme_iov_md": false 00:27:26.126 }, 00:27:26.126 "memory_domains": [ 00:27:26.126 { 00:27:26.126 "dma_device_id": "system", 00:27:26.126 "dma_device_type": 1 00:27:26.126 }, 00:27:26.126 { 00:27:26.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.126 "dma_device_type": 2 00:27:26.126 } 00:27:26.126 ], 00:27:26.126 "driver_specific": {} 00:27:26.126 } 00:27:26.126 ] 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:26.126 15:56:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.126 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.127 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.127 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.127 "name": "Existed_Raid", 00:27:26.127 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:26.127 "strip_size_kb": 64, 00:27:26.127 "state": "configuring", 00:27:26.127 "raid_level": "raid0", 00:27:26.127 "superblock": true, 00:27:26.127 "num_base_bdevs": 4, 00:27:26.127 "num_base_bdevs_discovered": 3, 00:27:26.127 "num_base_bdevs_operational": 4, 00:27:26.127 "base_bdevs_list": [ 00:27:26.127 { 00:27:26.127 "name": "BaseBdev1", 00:27:26.127 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:26.127 "is_configured": true, 00:27:26.127 "data_offset": 2048, 00:27:26.127 "data_size": 63488 00:27:26.127 }, 00:27:26.127 { 
00:27:26.127 "name": null, 00:27:26.127 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:26.127 "is_configured": false, 00:27:26.127 "data_offset": 0, 00:27:26.127 "data_size": 63488 00:27:26.127 }, 00:27:26.127 { 00:27:26.127 "name": "BaseBdev3", 00:27:26.127 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:26.127 "is_configured": true, 00:27:26.127 "data_offset": 2048, 00:27:26.127 "data_size": 63488 00:27:26.127 }, 00:27:26.127 { 00:27:26.127 "name": "BaseBdev4", 00:27:26.127 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:26.127 "is_configured": true, 00:27:26.127 "data_offset": 2048, 00:27:26.127 "data_size": 63488 00:27:26.127 } 00:27:26.127 ] 00:27:26.127 }' 00:27:26.127 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.127 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.386 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.386 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:26.386 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.387 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.387 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.387 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:26.387 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:26.387 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.387 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.675 [2024-11-05 15:56:58.805313] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.675 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.676 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.676 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.676 15:56:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.676 "name": "Existed_Raid", 00:27:26.676 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:26.676 "strip_size_kb": 64, 00:27:26.676 "state": "configuring", 00:27:26.676 "raid_level": "raid0", 00:27:26.676 "superblock": true, 00:27:26.676 "num_base_bdevs": 4, 00:27:26.676 "num_base_bdevs_discovered": 2, 00:27:26.676 "num_base_bdevs_operational": 4, 00:27:26.676 "base_bdevs_list": [ 00:27:26.676 { 00:27:26.676 "name": "BaseBdev1", 00:27:26.676 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:26.676 "is_configured": true, 00:27:26.676 "data_offset": 2048, 00:27:26.676 "data_size": 63488 00:27:26.676 }, 00:27:26.676 { 00:27:26.676 "name": null, 00:27:26.676 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:26.676 "is_configured": false, 00:27:26.676 "data_offset": 0, 00:27:26.676 "data_size": 63488 00:27:26.676 }, 00:27:26.676 { 00:27:26.676 "name": null, 00:27:26.676 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:26.676 "is_configured": false, 00:27:26.676 "data_offset": 0, 00:27:26.676 "data_size": 63488 00:27:26.676 }, 00:27:26.676 { 00:27:26.676 "name": "BaseBdev4", 00:27:26.676 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:26.676 "is_configured": true, 00:27:26.676 "data_offset": 2048, 00:27:26.676 "data_size": 63488 00:27:26.676 } 00:27:26.676 ] 00:27:26.676 }' 00:27:26.676 15:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.676 15:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.935 
15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.935 [2024-11-05 15:56:59.153386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.935 "name": "Existed_Raid", 00:27:26.935 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:26.935 "strip_size_kb": 64, 00:27:26.935 "state": "configuring", 00:27:26.935 "raid_level": "raid0", 00:27:26.935 "superblock": true, 00:27:26.935 "num_base_bdevs": 4, 00:27:26.935 "num_base_bdevs_discovered": 3, 00:27:26.935 "num_base_bdevs_operational": 4, 00:27:26.935 "base_bdevs_list": [ 00:27:26.935 { 00:27:26.935 "name": "BaseBdev1", 00:27:26.935 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:26.935 "is_configured": true, 00:27:26.935 "data_offset": 2048, 00:27:26.935 "data_size": 63488 00:27:26.935 }, 00:27:26.935 { 00:27:26.935 "name": null, 00:27:26.935 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:26.935 "is_configured": false, 00:27:26.935 "data_offset": 0, 00:27:26.935 "data_size": 63488 00:27:26.935 }, 00:27:26.935 { 00:27:26.935 "name": "BaseBdev3", 00:27:26.935 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:26.935 "is_configured": true, 00:27:26.935 "data_offset": 2048, 00:27:26.935 "data_size": 63488 00:27:26.935 }, 00:27:26.935 { 00:27:26.935 "name": "BaseBdev4", 00:27:26.935 "uuid": 
"d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:26.935 "is_configured": true, 00:27:26.935 "data_offset": 2048, 00:27:26.935 "data_size": 63488 00:27:26.935 } 00:27:26.935 ] 00:27:26.935 }' 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.935 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.195 [2024-11-05 15:56:59.485496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.195 "name": "Existed_Raid", 00:27:27.195 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:27.195 "strip_size_kb": 64, 00:27:27.195 "state": "configuring", 00:27:27.195 "raid_level": "raid0", 00:27:27.195 "superblock": true, 00:27:27.195 "num_base_bdevs": 4, 00:27:27.195 "num_base_bdevs_discovered": 2, 00:27:27.195 "num_base_bdevs_operational": 4, 00:27:27.195 "base_bdevs_list": [ 00:27:27.195 { 00:27:27.195 "name": null, 00:27:27.195 
"uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:27.195 "is_configured": false, 00:27:27.195 "data_offset": 0, 00:27:27.195 "data_size": 63488 00:27:27.195 }, 00:27:27.195 { 00:27:27.195 "name": null, 00:27:27.195 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:27.195 "is_configured": false, 00:27:27.195 "data_offset": 0, 00:27:27.195 "data_size": 63488 00:27:27.195 }, 00:27:27.195 { 00:27:27.195 "name": "BaseBdev3", 00:27:27.195 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:27.195 "is_configured": true, 00:27:27.195 "data_offset": 2048, 00:27:27.195 "data_size": 63488 00:27:27.195 }, 00:27:27.195 { 00:27:27.195 "name": "BaseBdev4", 00:27:27.195 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:27.195 "is_configured": true, 00:27:27.195 "data_offset": 2048, 00:27:27.195 "data_size": 63488 00:27:27.195 } 00:27:27.195 ] 00:27:27.195 }' 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.195 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.454 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.454 [2024-11-05 15:56:59.867792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.714 "name": "Existed_Raid", 00:27:27.714 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:27.714 "strip_size_kb": 64, 00:27:27.714 "state": "configuring", 00:27:27.714 "raid_level": "raid0", 00:27:27.714 "superblock": true, 00:27:27.714 "num_base_bdevs": 4, 00:27:27.714 "num_base_bdevs_discovered": 3, 00:27:27.714 "num_base_bdevs_operational": 4, 00:27:27.714 "base_bdevs_list": [ 00:27:27.714 { 00:27:27.714 "name": null, 00:27:27.714 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:27.714 "is_configured": false, 00:27:27.714 "data_offset": 0, 00:27:27.714 "data_size": 63488 00:27:27.714 }, 00:27:27.714 { 00:27:27.714 "name": "BaseBdev2", 00:27:27.714 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:27.714 "is_configured": true, 00:27:27.714 "data_offset": 2048, 00:27:27.714 "data_size": 63488 00:27:27.714 }, 00:27:27.714 { 00:27:27.714 "name": "BaseBdev3", 00:27:27.714 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:27.714 "is_configured": true, 00:27:27.714 "data_offset": 2048, 00:27:27.714 "data_size": 63488 00:27:27.714 }, 00:27:27.714 { 00:27:27.714 "name": "BaseBdev4", 00:27:27.714 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:27.714 "is_configured": true, 00:27:27.714 "data_offset": 2048, 00:27:27.714 "data_size": 63488 00:27:27.714 } 00:27:27.714 ] 00:27:27.714 }' 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.714 15:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.973 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.973 15:57:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.973 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.973 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:27.973 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.973 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 379eb90b-991b-4766-be37-75bea4a92220 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.974 [2024-11-05 15:57:00.294151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:27.974 [2024-11-05 15:57:00.294316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:27.974 [2024-11-05 15:57:00.294326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:27.974 [2024-11-05 15:57:00.294555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:27:27.974 NewBaseBdev 00:27:27.974 [2024-11-05 15:57:00.294656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:27.974 [2024-11-05 15:57:00.294665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:27.974 [2024-11-05 15:57:00.294757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.974 15:57:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.974 [ 00:27:27.974 { 00:27:27.974 "name": "NewBaseBdev", 00:27:27.974 "aliases": [ 00:27:27.974 "379eb90b-991b-4766-be37-75bea4a92220" 00:27:27.974 ], 00:27:27.974 "product_name": "Malloc disk", 00:27:27.974 "block_size": 512, 00:27:27.974 "num_blocks": 65536, 00:27:27.974 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:27.974 "assigned_rate_limits": { 00:27:27.974 "rw_ios_per_sec": 0, 00:27:27.974 "rw_mbytes_per_sec": 0, 00:27:27.974 "r_mbytes_per_sec": 0, 00:27:27.974 "w_mbytes_per_sec": 0 00:27:27.974 }, 00:27:27.974 "claimed": true, 00:27:27.974 "claim_type": "exclusive_write", 00:27:27.974 "zoned": false, 00:27:27.974 "supported_io_types": { 00:27:27.974 "read": true, 00:27:27.974 "write": true, 00:27:27.974 "unmap": true, 00:27:27.974 "flush": true, 00:27:27.974 "reset": true, 00:27:27.974 "nvme_admin": false, 00:27:27.974 "nvme_io": false, 00:27:27.974 "nvme_io_md": false, 00:27:27.974 "write_zeroes": true, 00:27:27.974 "zcopy": true, 00:27:27.974 "get_zone_info": false, 00:27:27.974 "zone_management": false, 00:27:27.974 "zone_append": false, 00:27:27.974 "compare": false, 00:27:27.974 "compare_and_write": false, 00:27:27.974 "abort": true, 00:27:27.974 "seek_hole": false, 00:27:27.974 "seek_data": false, 00:27:27.974 "copy": true, 00:27:27.974 "nvme_iov_md": false 00:27:27.974 }, 00:27:27.974 "memory_domains": [ 00:27:27.974 { 00:27:27.974 "dma_device_id": "system", 00:27:27.974 "dma_device_type": 1 00:27:27.974 }, 00:27:27.974 { 00:27:27.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.974 "dma_device_type": 2 00:27:27.974 } 00:27:27.974 ], 00:27:27.974 "driver_specific": {} 00:27:27.974 } 00:27:27.974 ] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:27.974 15:57:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.974 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.974 "name": "Existed_Raid", 00:27:27.974 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:27.974 "strip_size_kb": 64, 00:27:27.974 
"state": "online", 00:27:27.974 "raid_level": "raid0", 00:27:27.974 "superblock": true, 00:27:27.974 "num_base_bdevs": 4, 00:27:27.974 "num_base_bdevs_discovered": 4, 00:27:27.974 "num_base_bdevs_operational": 4, 00:27:27.974 "base_bdevs_list": [ 00:27:27.974 { 00:27:27.974 "name": "NewBaseBdev", 00:27:27.974 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:27.974 "is_configured": true, 00:27:27.974 "data_offset": 2048, 00:27:27.974 "data_size": 63488 00:27:27.974 }, 00:27:27.974 { 00:27:27.974 "name": "BaseBdev2", 00:27:27.974 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:27.974 "is_configured": true, 00:27:27.974 "data_offset": 2048, 00:27:27.974 "data_size": 63488 00:27:27.974 }, 00:27:27.974 { 00:27:27.974 "name": "BaseBdev3", 00:27:27.974 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:27.974 "is_configured": true, 00:27:27.974 "data_offset": 2048, 00:27:27.974 "data_size": 63488 00:27:27.974 }, 00:27:27.975 { 00:27:27.975 "name": "BaseBdev4", 00:27:27.975 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:27.975 "is_configured": true, 00:27:27.975 "data_offset": 2048, 00:27:27.975 "data_size": 63488 00:27:27.975 } 00:27:27.975 ] 00:27:27.975 }' 00:27:27.975 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.975 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:28.233 
15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.233 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:28.233 [2024-11-05 15:57:00.642578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:28.491 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.491 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.491 "name": "Existed_Raid", 00:27:28.491 "aliases": [ 00:27:28.491 "7f76c13f-1bc4-4607-b4d9-4d383f9b0129" 00:27:28.491 ], 00:27:28.491 "product_name": "Raid Volume", 00:27:28.491 "block_size": 512, 00:27:28.491 "num_blocks": 253952, 00:27:28.491 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:28.491 "assigned_rate_limits": { 00:27:28.491 "rw_ios_per_sec": 0, 00:27:28.491 "rw_mbytes_per_sec": 0, 00:27:28.491 "r_mbytes_per_sec": 0, 00:27:28.491 "w_mbytes_per_sec": 0 00:27:28.491 }, 00:27:28.491 "claimed": false, 00:27:28.491 "zoned": false, 00:27:28.491 "supported_io_types": { 00:27:28.491 "read": true, 00:27:28.491 "write": true, 00:27:28.491 "unmap": true, 00:27:28.491 "flush": true, 00:27:28.491 "reset": true, 00:27:28.491 "nvme_admin": false, 00:27:28.491 "nvme_io": false, 00:27:28.491 "nvme_io_md": false, 00:27:28.491 "write_zeroes": true, 00:27:28.491 "zcopy": false, 00:27:28.491 "get_zone_info": false, 00:27:28.491 "zone_management": false, 00:27:28.491 "zone_append": false, 00:27:28.491 "compare": false, 00:27:28.491 "compare_and_write": false, 00:27:28.491 "abort": 
false, 00:27:28.491 "seek_hole": false, 00:27:28.491 "seek_data": false, 00:27:28.491 "copy": false, 00:27:28.491 "nvme_iov_md": false 00:27:28.491 }, 00:27:28.491 "memory_domains": [ 00:27:28.491 { 00:27:28.491 "dma_device_id": "system", 00:27:28.491 "dma_device_type": 1 00:27:28.491 }, 00:27:28.491 { 00:27:28.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.491 "dma_device_type": 2 00:27:28.491 }, 00:27:28.491 { 00:27:28.491 "dma_device_id": "system", 00:27:28.491 "dma_device_type": 1 00:27:28.491 }, 00:27:28.491 { 00:27:28.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.491 "dma_device_type": 2 00:27:28.491 }, 00:27:28.491 { 00:27:28.491 "dma_device_id": "system", 00:27:28.491 "dma_device_type": 1 00:27:28.491 }, 00:27:28.491 { 00:27:28.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.491 "dma_device_type": 2 00:27:28.491 }, 00:27:28.491 { 00:27:28.491 "dma_device_id": "system", 00:27:28.491 "dma_device_type": 1 00:27:28.491 }, 00:27:28.491 { 00:27:28.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.492 "dma_device_type": 2 00:27:28.492 } 00:27:28.492 ], 00:27:28.492 "driver_specific": { 00:27:28.492 "raid": { 00:27:28.492 "uuid": "7f76c13f-1bc4-4607-b4d9-4d383f9b0129", 00:27:28.492 "strip_size_kb": 64, 00:27:28.492 "state": "online", 00:27:28.492 "raid_level": "raid0", 00:27:28.492 "superblock": true, 00:27:28.492 "num_base_bdevs": 4, 00:27:28.492 "num_base_bdevs_discovered": 4, 00:27:28.492 "num_base_bdevs_operational": 4, 00:27:28.492 "base_bdevs_list": [ 00:27:28.492 { 00:27:28.492 "name": "NewBaseBdev", 00:27:28.492 "uuid": "379eb90b-991b-4766-be37-75bea4a92220", 00:27:28.492 "is_configured": true, 00:27:28.492 "data_offset": 2048, 00:27:28.492 "data_size": 63488 00:27:28.492 }, 00:27:28.492 { 00:27:28.492 "name": "BaseBdev2", 00:27:28.492 "uuid": "21939b11-7844-45aa-a021-d31a5abe4c57", 00:27:28.492 "is_configured": true, 00:27:28.492 "data_offset": 2048, 00:27:28.492 "data_size": 63488 00:27:28.492 }, 00:27:28.492 { 00:27:28.492 
"name": "BaseBdev3", 00:27:28.492 "uuid": "0b254371-fa5c-46af-8ca4-5cd1e84734aa", 00:27:28.492 "is_configured": true, 00:27:28.492 "data_offset": 2048, 00:27:28.492 "data_size": 63488 00:27:28.492 }, 00:27:28.492 { 00:27:28.492 "name": "BaseBdev4", 00:27:28.492 "uuid": "d9164956-5110-4812-a2f5-14c2b7c524a1", 00:27:28.492 "is_configured": true, 00:27:28.492 "data_offset": 2048, 00:27:28.492 "data_size": 63488 00:27:28.492 } 00:27:28.492 ] 00:27:28.492 } 00:27:28.492 } 00:27:28.492 }' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:28.492 BaseBdev2 00:27:28.492 BaseBdev3 00:27:28.492 BaseBdev4' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.492 15:57:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.492 [2024-11-05 15:57:00.858304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:28.492 [2024-11-05 15:57:00.858420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:28.492 [2024-11-05 15:57:00.858486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:28.492 [2024-11-05 15:57:00.858542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:28.492 [2024-11-05 15:57:00.858550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68102 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68102 ']' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68102 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68102 00:27:28.492 killing process with pid 68102 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68102' 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68102 00:27:28.492 [2024-11-05 15:57:00.886702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:28.492 15:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68102 00:27:28.750 [2024-11-05 15:57:01.081606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:29.315 15:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:29.315 00:27:29.315 real 0m8.120s 00:27:29.315 user 0m13.076s 00:27:29.315 sys 0m1.294s 00:27:29.315 15:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:29.315 15:57:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.315 ************************************ 00:27:29.315 END TEST raid_state_function_test_sb 00:27:29.315 ************************************ 00:27:29.315 15:57:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:27:29.315 15:57:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:29.315 15:57:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:29.315 15:57:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:29.315 ************************************ 00:27:29.315 START TEST raid_superblock_test 00:27:29.315 ************************************ 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68739 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68739 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68739 ']' 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:29.315 15:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.316 15:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:29.316 15:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:29.316 15:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.573 [2024-11-05 15:57:01.750500] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:27:29.573 [2024-11-05 15:57:01.750592] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68739 ] 00:27:29.573 [2024-11-05 15:57:01.899861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.573 [2024-11-05 15:57:01.983059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.832 [2024-11-05 15:57:02.090611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:29.832 [2024-11-05 15:57:02.090645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:30.426 
15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.426 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.426 malloc1 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.427 [2024-11-05 15:57:02.586239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:30.427 [2024-11-05 15:57:02.586293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.427 [2024-11-05 15:57:02.586311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:30.427 [2024-11-05 15:57:02.586318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.427 [2024-11-05 15:57:02.588061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.427 [2024-11-05 15:57:02.588089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:30.427 pt1 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.427 malloc2 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.427 [2024-11-05 15:57:02.621594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:30.427 [2024-11-05 15:57:02.621632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.427 [2024-11-05 15:57:02.621648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:30.427 [2024-11-05 15:57:02.621654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.427 [2024-11-05 15:57:02.623349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.427 [2024-11-05 15:57:02.623375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:30.427 
pt2 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.427 malloc3 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.427 [2024-11-05 15:57:02.666615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:30.427 [2024-11-05 15:57:02.666655] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.427 [2024-11-05 15:57:02.666672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:30.427 [2024-11-05 15:57:02.666679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.427 [2024-11-05 15:57:02.668308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.427 [2024-11-05 15:57:02.668336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:30.427 pt3 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.427 malloc4 00:27:30.427 15:57:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.428 [2024-11-05 15:57:02.698431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:30.428 [2024-11-05 15:57:02.698473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.428 [2024-11-05 15:57:02.698487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:30.428 [2024-11-05 15:57:02.698494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.428 [2024-11-05 15:57:02.700242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.428 [2024-11-05 15:57:02.700272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:30.428 pt4 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.428 [2024-11-05 15:57:02.706472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:30.428 [2024-11-05 
15:57:02.707990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:30.428 [2024-11-05 15:57:02.708045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:30.428 [2024-11-05 15:57:02.708095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:30.428 [2024-11-05 15:57:02.708245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:30.428 [2024-11-05 15:57:02.708257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:30.428 [2024-11-05 15:57:02.708478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:30.428 [2024-11-05 15:57:02.708604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:30.428 [2024-11-05 15:57:02.708616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:30.428 [2024-11-05 15:57:02.708734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:30.428 "name": "raid_bdev1", 00:27:30.428 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016", 00:27:30.428 "strip_size_kb": 64, 00:27:30.428 "state": "online", 00:27:30.428 "raid_level": "raid0", 00:27:30.428 "superblock": true, 00:27:30.428 "num_base_bdevs": 4, 00:27:30.428 "num_base_bdevs_discovered": 4, 00:27:30.428 "num_base_bdevs_operational": 4, 00:27:30.428 "base_bdevs_list": [ 00:27:30.428 { 00:27:30.428 "name": "pt1", 00:27:30.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:30.428 "is_configured": true, 00:27:30.428 "data_offset": 2048, 00:27:30.428 "data_size": 63488 00:27:30.428 }, 00:27:30.428 { 00:27:30.428 "name": "pt2", 00:27:30.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:30.428 "is_configured": true, 00:27:30.428 "data_offset": 2048, 00:27:30.428 "data_size": 63488 00:27:30.428 }, 00:27:30.428 { 00:27:30.428 "name": "pt3", 00:27:30.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:30.428 "is_configured": true, 00:27:30.428 "data_offset": 2048, 00:27:30.428 
"data_size": 63488 00:27:30.428 }, 00:27:30.428 { 00:27:30.428 "name": "pt4", 00:27:30.428 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:30.428 "is_configured": true, 00:27:30.428 "data_offset": 2048, 00:27:30.428 "data_size": 63488 00:27:30.428 } 00:27:30.428 ] 00:27:30.428 }' 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:30.428 15:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.687 [2024-11-05 15:57:03.030788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.687 "name": "raid_bdev1", 00:27:30.687 "aliases": [ 00:27:30.687 "179a7c0e-50a8-45cc-93d0-46f8b259a016" 
00:27:30.687 ], 00:27:30.687 "product_name": "Raid Volume", 00:27:30.687 "block_size": 512, 00:27:30.687 "num_blocks": 253952, 00:27:30.687 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016", 00:27:30.687 "assigned_rate_limits": { 00:27:30.687 "rw_ios_per_sec": 0, 00:27:30.687 "rw_mbytes_per_sec": 0, 00:27:30.687 "r_mbytes_per_sec": 0, 00:27:30.687 "w_mbytes_per_sec": 0 00:27:30.687 }, 00:27:30.687 "claimed": false, 00:27:30.687 "zoned": false, 00:27:30.687 "supported_io_types": { 00:27:30.687 "read": true, 00:27:30.687 "write": true, 00:27:30.687 "unmap": true, 00:27:30.687 "flush": true, 00:27:30.687 "reset": true, 00:27:30.687 "nvme_admin": false, 00:27:30.687 "nvme_io": false, 00:27:30.687 "nvme_io_md": false, 00:27:30.687 "write_zeroes": true, 00:27:30.687 "zcopy": false, 00:27:30.687 "get_zone_info": false, 00:27:30.687 "zone_management": false, 00:27:30.687 "zone_append": false, 00:27:30.687 "compare": false, 00:27:30.687 "compare_and_write": false, 00:27:30.687 "abort": false, 00:27:30.687 "seek_hole": false, 00:27:30.687 "seek_data": false, 00:27:30.687 "copy": false, 00:27:30.687 "nvme_iov_md": false 00:27:30.687 }, 00:27:30.687 "memory_domains": [ 00:27:30.687 { 00:27:30.687 "dma_device_id": "system", 00:27:30.687 "dma_device_type": 1 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.687 "dma_device_type": 2 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "dma_device_id": "system", 00:27:30.687 "dma_device_type": 1 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.687 "dma_device_type": 2 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "dma_device_id": "system", 00:27:30.687 "dma_device_type": 1 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.687 "dma_device_type": 2 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "dma_device_id": "system", 00:27:30.687 "dma_device_type": 1 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:30.687 "dma_device_type": 2 00:27:30.687 } 00:27:30.687 ], 00:27:30.687 "driver_specific": { 00:27:30.687 "raid": { 00:27:30.687 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016", 00:27:30.687 "strip_size_kb": 64, 00:27:30.687 "state": "online", 00:27:30.687 "raid_level": "raid0", 00:27:30.687 "superblock": true, 00:27:30.687 "num_base_bdevs": 4, 00:27:30.687 "num_base_bdevs_discovered": 4, 00:27:30.687 "num_base_bdevs_operational": 4, 00:27:30.687 "base_bdevs_list": [ 00:27:30.687 { 00:27:30.687 "name": "pt1", 00:27:30.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:30.687 "is_configured": true, 00:27:30.687 "data_offset": 2048, 00:27:30.687 "data_size": 63488 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "name": "pt2", 00:27:30.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:30.687 "is_configured": true, 00:27:30.687 "data_offset": 2048, 00:27:30.687 "data_size": 63488 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "name": "pt3", 00:27:30.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:30.687 "is_configured": true, 00:27:30.687 "data_offset": 2048, 00:27:30.687 "data_size": 63488 00:27:30.687 }, 00:27:30.687 { 00:27:30.687 "name": "pt4", 00:27:30.687 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:30.687 "is_configured": true, 00:27:30.687 "data_offset": 2048, 00:27:30.687 "data_size": 63488 00:27:30.687 } 00:27:30.687 ] 00:27:30.687 } 00:27:30.687 } 00:27:30.687 }' 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:30.687 pt2 00:27:30.687 pt3 00:27:30.687 pt4' 00:27:30.687 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.946 15:57:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:30.946 [2024-11-05 15:57:03.250804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.946 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=179a7c0e-50a8-45cc-93d0-46f8b259a016 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 179a7c0e-50a8-45cc-93d0-46f8b259a016 ']' 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.947 [2024-11-05 15:57:03.282540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:30.947 [2024-11-05 15:57:03.282566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:30.947 [2024-11-05 15:57:03.282630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:30.947 [2024-11-05 15:57:03.282689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:30.947 [2024-11-05 15:57:03.282701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.947 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.206 [2024-11-05 15:57:03.394567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:27:31.206 [2024-11-05 15:57:03.396111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:27:31.206 [2024-11-05 15:57:03.396155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:27:31.206 [2024-11-05 15:57:03.396182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:27:31.206 [2024-11-05 15:57:03.396220] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:27:31.206 [2024-11-05 15:57:03.396255] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:27:31.206 [2024-11-05 15:57:03.396269] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:27:31.206 [2024-11-05 15:57:03.396284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:27:31.206 [2024-11-05 15:57:03.396294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:27:31.206 [2024-11-05 15:57:03.396304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:27:31.206 request:
00:27:31.206 {
00:27:31.206 "name": "raid_bdev1",
00:27:31.206 "raid_level": "raid0",
00:27:31.206 "base_bdevs": [
00:27:31.206 "malloc1",
00:27:31.206 "malloc2",
00:27:31.206 "malloc3",
00:27:31.206 "malloc4"
00:27:31.206 ],
00:27:31.206 "strip_size_kb": 64,
00:27:31.206 "superblock": false,
00:27:31.206 "method": "bdev_raid_create",
00:27:31.206 "req_id": 1
00:27:31.206 }
00:27:31.206 Got JSON-RPC error response
00:27:31.206 response:
00:27:31.206 {
00:27:31.206 "code": -17,
00:27:31.206 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:27:31.206 }
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:27:31.206 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.207 [2024-11-05 15:57:03.434560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:27:31.207 [2024-11-05 15:57:03.434602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:31.207 [2024-11-05 15:57:03.434617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:27:31.207 [2024-11-05 15:57:03.434626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:31.207 [2024-11-05 15:57:03.436383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:31.207 [2024-11-05 15:57:03.436416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:27:31.207 [2024-11-05 15:57:03.436474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:27:31.207 [2024-11-05 15:57:03.436519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:27:31.207 pt1
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:27:31.207 "name": "raid_bdev1",
00:27:31.207 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016",
00:27:31.207 "strip_size_kb": 64,
00:27:31.207 "state": "configuring",
00:27:31.207 "raid_level": "raid0",
00:27:31.207 "superblock": true,
00:27:31.207 "num_base_bdevs": 4,
00:27:31.207 "num_base_bdevs_discovered": 1,
00:27:31.207 "num_base_bdevs_operational": 4,
00:27:31.207 "base_bdevs_list": [
00:27:31.207 {
00:27:31.207 "name": "pt1",
00:27:31.207 "uuid": "00000000-0000-0000-0000-000000000001",
00:27:31.207 "is_configured": true,
00:27:31.207 "data_offset": 2048,
00:27:31.207 "data_size": 63488
00:27:31.207 },
00:27:31.207 {
00:27:31.207 "name": null,
00:27:31.207 "uuid": "00000000-0000-0000-0000-000000000002",
00:27:31.207 "is_configured": false,
00:27:31.207 "data_offset": 2048,
00:27:31.207 "data_size": 63488
00:27:31.207 },
00:27:31.207 {
00:27:31.207 "name": null,
00:27:31.207 "uuid": "00000000-0000-0000-0000-000000000003",
00:27:31.207 "is_configured": false,
00:27:31.207 "data_offset": 2048,
00:27:31.207 "data_size": 63488
00:27:31.207 },
00:27:31.207 {
00:27:31.207 "name": null,
00:27:31.207 "uuid": "00000000-0000-0000-0000-000000000004",
00:27:31.207 "is_configured": false,
00:27:31.207 "data_offset": 2048,
00:27:31.207 "data_size": 63488
00:27:31.207 }
00:27:31.207 ]
00:27:31.207 }'
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:27:31.207 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.465 [2024-11-05 15:57:03.774646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:27:31.465 [2024-11-05 15:57:03.774703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:31.465 [2024-11-05 15:57:03.774720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:27:31.465 [2024-11-05 15:57:03.774729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:31.465 [2024-11-05 15:57:03.775088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:31.465 [2024-11-05 15:57:03.775102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:27:31.465 [2024-11-05 15:57:03.775164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:27:31.465 [2024-11-05 15:57:03.775181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:27:31.465 pt2
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.465 [2024-11-05 15:57:03.782639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:27:31.465 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:27:31.466 "name": "raid_bdev1",
00:27:31.466 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016",
00:27:31.466 "strip_size_kb": 64,
00:27:31.466 "state": "configuring",
00:27:31.466 "raid_level": "raid0",
00:27:31.466 "superblock": true,
00:27:31.466 "num_base_bdevs": 4,
00:27:31.466 "num_base_bdevs_discovered": 1,
00:27:31.466 "num_base_bdevs_operational": 4,
00:27:31.466 "base_bdevs_list": [
00:27:31.466 {
00:27:31.466 "name": "pt1",
00:27:31.466 "uuid": "00000000-0000-0000-0000-000000000001",
00:27:31.466 "is_configured": true,
00:27:31.466 "data_offset": 2048,
00:27:31.466 "data_size": 63488
00:27:31.466 },
00:27:31.466 {
00:27:31.466 "name": null,
00:27:31.466 "uuid": "00000000-0000-0000-0000-000000000002",
00:27:31.466 "is_configured": false,
00:27:31.466 "data_offset": 0,
00:27:31.466 "data_size": 63488
00:27:31.466 },
00:27:31.466 {
00:27:31.466 "name": null,
00:27:31.466 "uuid": "00000000-0000-0000-0000-000000000003",
00:27:31.466 "is_configured": false,
00:27:31.466 "data_offset": 2048,
00:27:31.466 "data_size": 63488
00:27:31.466 },
00:27:31.466 {
00:27:31.466 "name": null,
00:27:31.466 "uuid": "00000000-0000-0000-0000-000000000004",
00:27:31.466 "is_configured": false,
00:27:31.466 "data_offset": 2048,
00:27:31.466 "data_size": 63488
00:27:31.466 }
00:27:31.466 ]
00:27:31.466 }'
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:27:31.466 15:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.724 [2024-11-05 15:57:04.110710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:27:31.724 [2024-11-05 15:57:04.110758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:31.724 [2024-11-05 15:57:04.110773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:27:31.724 [2024-11-05 15:57:04.110780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:31.724 [2024-11-05 15:57:04.111128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:31.724 [2024-11-05 15:57:04.111144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:27:31.724 [2024-11-05 15:57:04.111206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:27:31.724 [2024-11-05 15:57:04.111221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:27:31.724 pt2
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.724 [2024-11-05 15:57:04.118690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:27:31.724 [2024-11-05 15:57:04.118727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:31.724 [2024-11-05 15:57:04.118744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:27:31.724 [2024-11-05 15:57:04.118750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:31.724 [2024-11-05 15:57:04.119051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:31.724 [2024-11-05 15:57:04.119065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:27:31.724 [2024-11-05 15:57:04.119113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:27:31.724 [2024-11-05 15:57:04.119126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:27:31.724 pt3
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.724 [2024-11-05 15:57:04.126676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:27:31.724 [2024-11-05 15:57:04.126712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:31.724 [2024-11-05 15:57:04.126725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:27:31.724 [2024-11-05 15:57:04.126731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:31.724 [2024-11-05 15:57:04.127042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:31.724 [2024-11-05 15:57:04.127057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:27:31.724 [2024-11-05 15:57:04.127102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:27:31.724 [2024-11-05 15:57:04.127118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:27:31.724 [2024-11-05 15:57:04.127222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:27:31.724 [2024-11-05 15:57:04.127229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:27:31.724 [2024-11-05 15:57:04.127411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:27:31.724 [2024-11-05 15:57:04.127521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:27:31.724 [2024-11-05 15:57:04.127530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:27:31.724 [2024-11-05 15:57:04.127624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:31.724 pt4
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.724 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:31.983 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.983 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:27:31.983 "name": "raid_bdev1",
00:27:31.983 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016",
00:27:31.983 "strip_size_kb": 64,
00:27:31.983 "state": "online",
00:27:31.983 "raid_level": "raid0",
00:27:31.983 "superblock": true,
00:27:31.983 "num_base_bdevs": 4,
00:27:31.983 "num_base_bdevs_discovered": 4,
00:27:31.983 "num_base_bdevs_operational": 4,
00:27:31.983 "base_bdevs_list": [
00:27:31.983 {
00:27:31.983 "name": "pt1",
00:27:31.983 "uuid": "00000000-0000-0000-0000-000000000001",
00:27:31.983 "is_configured": true,
00:27:31.983 "data_offset": 2048,
00:27:31.983 "data_size": 63488
00:27:31.983 },
00:27:31.983 {
00:27:31.983 "name": "pt2",
00:27:31.983 "uuid": "00000000-0000-0000-0000-000000000002",
00:27:31.983 "is_configured": true,
00:27:31.983 "data_offset": 2048,
00:27:31.983 "data_size": 63488
00:27:31.983 },
00:27:31.983 {
00:27:31.983 "name": "pt3",
00:27:31.983 "uuid": "00000000-0000-0000-0000-000000000003",
00:27:31.983 "is_configured": true,
00:27:31.983 "data_offset": 2048,
00:27:31.983 "data_size": 63488
00:27:31.983 },
00:27:31.983 {
00:27:31.983 "name": "pt4",
00:27:31.983 "uuid": "00000000-0000-0000-0000-000000000004",
00:27:31.983 "is_configured": true,
00:27:31.983 "data_offset": 2048,
00:27:31.983 "data_size": 63488
00:27:31.983 }
00:27:31.983 ]
00:27:31.983 }'
00:27:31.983 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:27:31.983 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
[2024-11-05 15:57:04.435051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.241 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:32.241 "name": "raid_bdev1",
00:27:32.241 "aliases": [
00:27:32.242 "179a7c0e-50a8-45cc-93d0-46f8b259a016"
00:27:32.242 ],
00:27:32.242 "product_name": "Raid Volume",
00:27:32.242 "block_size": 512,
00:27:32.242 "num_blocks": 253952,
00:27:32.242 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016",
00:27:32.242 "assigned_rate_limits": {
00:27:32.242 "rw_ios_per_sec": 0,
00:27:32.242 "rw_mbytes_per_sec": 0,
00:27:32.242 "r_mbytes_per_sec": 0,
00:27:32.242 "w_mbytes_per_sec": 0
00:27:32.242 },
00:27:32.242 "claimed": false,
00:27:32.242 "zoned": false,
00:27:32.242 "supported_io_types": {
00:27:32.242 "read": true,
00:27:32.242 "write": true,
00:27:32.242 "unmap": true,
00:27:32.242 "flush": true,
00:27:32.242 "reset": true,
00:27:32.242 "nvme_admin": false,
00:27:32.242 "nvme_io": false,
00:27:32.242 "nvme_io_md": false,
00:27:32.242 "write_zeroes": true,
00:27:32.242 "zcopy": false,
00:27:32.242 "get_zone_info": false,
00:27:32.242 "zone_management": false,
00:27:32.242 "zone_append": false,
00:27:32.242 "compare": false,
00:27:32.242 "compare_and_write": false,
00:27:32.242 "abort": false,
00:27:32.242 "seek_hole": false,
00:27:32.242 "seek_data": false,
00:27:32.242 "copy": false,
00:27:32.242 "nvme_iov_md": false
00:27:32.242 },
00:27:32.242 "memory_domains": [
00:27:32.242 {
00:27:32.242 "dma_device_id": "system",
00:27:32.242 "dma_device_type": 1
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:32.242 "dma_device_type": 2
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "dma_device_id": "system",
00:27:32.242 "dma_device_type": 1
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:32.242 "dma_device_type": 2
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "dma_device_id": "system",
00:27:32.242 "dma_device_type": 1
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:32.242 "dma_device_type": 2
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "dma_device_id": "system",
00:27:32.242 "dma_device_type": 1
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:32.242 "dma_device_type": 2
00:27:32.242 }
00:27:32.242 ],
00:27:32.242 "driver_specific": {
00:27:32.242 "raid": {
00:27:32.242 "uuid": "179a7c0e-50a8-45cc-93d0-46f8b259a016",
00:27:32.242 "strip_size_kb": 64,
00:27:32.242 "state": "online",
00:27:32.242 "raid_level": "raid0",
00:27:32.242 "superblock": true,
00:27:32.242 "num_base_bdevs": 4,
00:27:32.242 "num_base_bdevs_discovered": 4,
00:27:32.242 "num_base_bdevs_operational": 4,
00:27:32.242 "base_bdevs_list": [
00:27:32.242 {
00:27:32.242 "name": "pt1",
00:27:32.242 "uuid": "00000000-0000-0000-0000-000000000001",
00:27:32.242 "is_configured": true,
00:27:32.242 "data_offset": 2048,
00:27:32.242 "data_size": 63488
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "name": "pt2",
00:27:32.242 "uuid": "00000000-0000-0000-0000-000000000002",
00:27:32.242 "is_configured": true,
00:27:32.242 "data_offset": 2048,
00:27:32.242 "data_size": 63488
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "name": "pt3",
00:27:32.242 "uuid": "00000000-0000-0000-0000-000000000003",
00:27:32.242 "is_configured": true,
00:27:32.242 "data_offset": 2048,
00:27:32.242 "data_size": 63488
00:27:32.242 },
00:27:32.242 {
00:27:32.242 "name": "pt4",
00:27:32.242 "uuid": "00000000-0000-0000-0000-000000000004",
00:27:32.242 "is_configured": true,
00:27:32.242 "data_offset": 2048,
00:27:32.242 "data_size": 63488
00:27:32.242 }
00:27:32.242 ]
00:27:32.242 }
00:27:32.242 }
00:27:32.242 }'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:27:32.242 pt2
00:27:32.242 pt3
00:27:32.242 pt4'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:27:32.242 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
[2024-11-05 15:57:04.651042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:27:32.499 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 179a7c0e-50a8-45cc-93d0-46f8b259a016 '!=' 179a7c0e-50a8-45cc-93d0-46f8b259a016 ']'
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68739
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68739 ']'
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68739
00:27:32.500 15:57:04 bdev_raid.raid_superblock_test --
common/autotest_common.sh@957 -- # uname 00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68739 00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:32.500 killing process with pid 68739 00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68739' 00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68739 00:27:32.500 [2024-11-05 15:57:04.699503] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:32.500 [2024-11-05 15:57:04.699559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:32.500 15:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68739 00:27:32.500 [2024-11-05 15:57:04.699617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:32.500 [2024-11-05 15:57:04.699624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:32.500 [2024-11-05 15:57:04.891757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:33.064 15:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:33.064 ************************************ 00:27:33.064 END TEST raid_superblock_test 00:27:33.064 00:27:33.064 real 0m3.759s 00:27:33.064 user 0m5.495s 00:27:33.064 sys 0m0.586s 00:27:33.064 15:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:33.064 15:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:27:33.064 ************************************ 00:27:33.321 15:57:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:27:33.321 15:57:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:33.321 15:57:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:33.321 15:57:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:33.321 ************************************ 00:27:33.321 START TEST raid_read_error_test 00:27:33.321 ************************************ 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nhV9AGXOiX 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68986 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68986 00:27:33.321 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 68986 ']' 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:33.321 15:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.321 [2024-11-05 15:57:05.582782] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:27:33.321 [2024-11-05 15:57:05.582923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68986 ] 00:27:33.578 [2024-11-05 15:57:05.741993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.578 [2024-11-05 15:57:05.843086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.578 [2024-11-05 15:57:05.979292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:33.578 [2024-11-05 15:57:05.979354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.143 BaseBdev1_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.143 true 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.143 [2024-11-05 15:57:06.462890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:34.143 [2024-11-05 15:57:06.462940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.143 [2024-11-05 15:57:06.462959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:34.143 [2024-11-05 15:57:06.462969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.143 [2024-11-05 15:57:06.465073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.143 [2024-11-05 15:57:06.465109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:34.143 BaseBdev1 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.143 BaseBdev2_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.143 true 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.143 [2024-11-05 15:57:06.506461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:34.143 [2024-11-05 15:57:06.506510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.143 [2024-11-05 15:57:06.506526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:34.143 [2024-11-05 15:57:06.506536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.143 [2024-11-05 15:57:06.508595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.143 [2024-11-05 15:57:06.508631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:34.143 BaseBdev2 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.143 BaseBdev3_malloc 00:27:34.143 15:57:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.143 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.401 true 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.401 [2024-11-05 15:57:06.565715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:34.401 [2024-11-05 15:57:06.565767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.401 [2024-11-05 15:57:06.565784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:34.401 [2024-11-05 15:57:06.565795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.401 [2024-11-05 15:57:06.567961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.401 [2024-11-05 15:57:06.567996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:34.401 BaseBdev3 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.401 BaseBdev4_malloc 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.401 true 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.401 [2024-11-05 15:57:06.609225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:34.401 [2024-11-05 15:57:06.609270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.401 [2024-11-05 15:57:06.609286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:34.401 [2024-11-05 15:57:06.609297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.401 [2024-11-05 15:57:06.611353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.401 [2024-11-05 15:57:06.611389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:34.401 BaseBdev4 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.401 [2024-11-05 15:57:06.617287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:34.401 [2024-11-05 15:57:06.619089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:34.401 [2024-11-05 15:57:06.619163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:34.401 [2024-11-05 15:57:06.619228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:34.401 [2024-11-05 15:57:06.619450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:27:34.401 [2024-11-05 15:57:06.619470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:34.401 [2024-11-05 15:57:06.619702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:27:34.401 [2024-11-05 15:57:06.619856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:27:34.401 [2024-11-05 15:57:06.619872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:27:34.401 [2024-11-05 15:57:06.620007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:34.401 15:57:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:34.401 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:34.402 "name": "raid_bdev1", 00:27:34.402 "uuid": "8400c475-6862-4988-9287-ff71063a67e1", 00:27:34.402 "strip_size_kb": 64, 00:27:34.402 "state": "online", 00:27:34.402 "raid_level": "raid0", 00:27:34.402 "superblock": true, 00:27:34.402 "num_base_bdevs": 4, 00:27:34.402 "num_base_bdevs_discovered": 4, 00:27:34.402 "num_base_bdevs_operational": 4, 00:27:34.402 "base_bdevs_list": [ 00:27:34.402 
{ 00:27:34.402 "name": "BaseBdev1", 00:27:34.402 "uuid": "8892e0c5-15a0-5518-a1d7-fca09af7d08f", 00:27:34.402 "is_configured": true, 00:27:34.402 "data_offset": 2048, 00:27:34.402 "data_size": 63488 00:27:34.402 }, 00:27:34.402 { 00:27:34.402 "name": "BaseBdev2", 00:27:34.402 "uuid": "436b7818-6cfe-5717-8797-d8f79942c79f", 00:27:34.402 "is_configured": true, 00:27:34.402 "data_offset": 2048, 00:27:34.402 "data_size": 63488 00:27:34.402 }, 00:27:34.402 { 00:27:34.402 "name": "BaseBdev3", 00:27:34.402 "uuid": "7d548e15-a3cf-583e-93ec-0df205efba40", 00:27:34.402 "is_configured": true, 00:27:34.402 "data_offset": 2048, 00:27:34.402 "data_size": 63488 00:27:34.402 }, 00:27:34.402 { 00:27:34.402 "name": "BaseBdev4", 00:27:34.402 "uuid": "e013a62b-d4f1-5514-9f67-81c2eb0a74e1", 00:27:34.402 "is_configured": true, 00:27:34.402 "data_offset": 2048, 00:27:34.402 "data_size": 63488 00:27:34.402 } 00:27:34.402 ] 00:27:34.402 }' 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:34.402 15:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.659 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:34.659 15:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:34.659 [2024-11-05 15:57:07.018287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.591 15:57:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.591 15:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.591 15:57:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:35.591 "name": "raid_bdev1", 00:27:35.591 "uuid": "8400c475-6862-4988-9287-ff71063a67e1", 00:27:35.591 "strip_size_kb": 64, 00:27:35.591 "state": "online", 00:27:35.591 "raid_level": "raid0", 00:27:35.591 "superblock": true, 00:27:35.591 "num_base_bdevs": 4, 00:27:35.591 "num_base_bdevs_discovered": 4, 00:27:35.591 "num_base_bdevs_operational": 4, 00:27:35.591 "base_bdevs_list": [ 00:27:35.591 { 00:27:35.591 "name": "BaseBdev1", 00:27:35.591 "uuid": "8892e0c5-15a0-5518-a1d7-fca09af7d08f", 00:27:35.591 "is_configured": true, 00:27:35.591 "data_offset": 2048, 00:27:35.591 "data_size": 63488 00:27:35.591 }, 00:27:35.591 { 00:27:35.591 "name": "BaseBdev2", 00:27:35.591 "uuid": "436b7818-6cfe-5717-8797-d8f79942c79f", 00:27:35.591 "is_configured": true, 00:27:35.591 "data_offset": 2048, 00:27:35.591 "data_size": 63488 00:27:35.591 }, 00:27:35.591 { 00:27:35.591 "name": "BaseBdev3", 00:27:35.592 "uuid": "7d548e15-a3cf-583e-93ec-0df205efba40", 00:27:35.592 "is_configured": true, 00:27:35.592 "data_offset": 2048, 00:27:35.592 "data_size": 63488 00:27:35.592 }, 00:27:35.592 { 00:27:35.592 "name": "BaseBdev4", 00:27:35.592 "uuid": "e013a62b-d4f1-5514-9f67-81c2eb0a74e1", 00:27:35.592 "is_configured": true, 00:27:35.592 "data_offset": 2048, 00:27:35.592 "data_size": 63488 00:27:35.592 } 00:27:35.592 ] 00:27:35.592 }' 00:27:35.592 15:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:35.592 15:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.156 15:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:36.156 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.156 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.156 [2024-11-05 15:57:08.272328] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:36.156 [2024-11-05 15:57:08.272361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:36.156 [2024-11-05 15:57:08.275358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:36.156 [2024-11-05 15:57:08.275415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:36.156 [2024-11-05 15:57:08.275459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:36.156 [2024-11-05 15:57:08.275475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:36.156 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.156 15:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68986 00:27:36.156 { 00:27:36.156 "results": [ 00:27:36.156 { 00:27:36.156 "job": "raid_bdev1", 00:27:36.156 "core_mask": "0x1", 00:27:36.156 "workload": "randrw", 00:27:36.157 "percentage": 50, 00:27:36.157 "status": "finished", 00:27:36.157 "queue_depth": 1, 00:27:36.157 "io_size": 131072, 00:27:36.157 "runtime": 1.252184, 00:27:36.157 "iops": 14959.462826549452, 00:27:36.157 "mibps": 1869.9328533186815, 00:27:36.157 "io_failed": 1, 00:27:36.157 "io_timeout": 0, 00:27:36.157 "avg_latency_us": 91.36633156626111, 00:27:36.157 "min_latency_us": 33.47692307692308, 00:27:36.157 "max_latency_us": 1688.8123076923077 00:27:36.157 } 00:27:36.157 ], 00:27:36.157 "core_count": 1 00:27:36.157 } 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 68986 ']' 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 68986 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68986 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:36.157 killing process with pid 68986 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68986' 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 68986 00:27:36.157 [2024-11-05 15:57:08.300954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:36.157 15:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 68986 00:27:36.157 [2024-11-05 15:57:08.501413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nhV9AGXOiX 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:27:36.721 00:27:36.721 real 0m3.612s 00:27:36.721 user 0m4.316s 00:27:36.721 sys 0m0.392s 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:27:36.721 15:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.721 ************************************ 00:27:36.721 END TEST raid_read_error_test 00:27:36.721 ************************************ 00:27:36.978 15:57:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:27:36.978 15:57:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:36.978 15:57:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:36.978 15:57:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:36.978 ************************************ 00:27:36.978 START TEST raid_write_error_test 00:27:36.978 ************************************ 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:36.978 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.le21okBHSA 00:27:36.979 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69122 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69122 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69122 ']' 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:36.979 15:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.979 [2024-11-05 15:57:09.223689] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:27:36.979 [2024-11-05 15:57:09.223810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69122 ] 00:27:36.979 [2024-11-05 15:57:09.379885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.236 [2024-11-05 15:57:09.461255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.236 [2024-11-05 15:57:09.567869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.236 [2024-11-05 15:57:09.567902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.801 BaseBdev1_malloc 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.801 true 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.801 [2024-11-05 15:57:10.106079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:37.801 [2024-11-05 15:57:10.106217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.801 [2024-11-05 15:57:10.106237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:37.801 [2024-11-05 15:57:10.106246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.801 [2024-11-05 15:57:10.107989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.801 [2024-11-05 15:57:10.108018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:37.801 BaseBdev1 00:27:37.801 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.802 BaseBdev2_malloc 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:37.802 15:57:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.802 true 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.802 [2024-11-05 15:57:10.145400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:37.802 [2024-11-05 15:57:10.145521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.802 [2024-11-05 15:57:10.145539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:37.802 [2024-11-05 15:57:10.145547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.802 [2024-11-05 15:57:10.147279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.802 [2024-11-05 15:57:10.147309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:37.802 BaseBdev2 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:27:37.802 BaseBdev3_malloc 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.802 true 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.802 [2024-11-05 15:57:10.200220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:37.802 [2024-11-05 15:57:10.200259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.802 [2024-11-05 15:57:10.200272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:37.802 [2024-11-05 15:57:10.200281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.802 [2024-11-05 15:57:10.201978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.802 [2024-11-05 15:57:10.202083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:37.802 BaseBdev3 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.802 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.058 BaseBdev4_malloc 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.058 true 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.058 [2024-11-05 15:57:10.239323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:38.058 [2024-11-05 15:57:10.239358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.058 [2024-11-05 15:57:10.239371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:38.058 [2024-11-05 15:57:10.239379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.058 [2024-11-05 15:57:10.241056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.058 [2024-11-05 15:57:10.241159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:38.058 BaseBdev4 
00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.058 [2024-11-05 15:57:10.247374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:38.058 [2024-11-05 15:57:10.248867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:38.058 [2024-11-05 15:57:10.248927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:38.058 [2024-11-05 15:57:10.248979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:38.058 [2024-11-05 15:57:10.249149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:27:38.058 [2024-11-05 15:57:10.249161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:38.058 [2024-11-05 15:57:10.249350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:27:38.058 [2024-11-05 15:57:10.249460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:27:38.058 [2024-11-05 15:57:10.249468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:27:38.058 [2024-11-05 15:57:10.249576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.058 "name": "raid_bdev1", 00:27:38.058 "uuid": "63055e24-9b5c-46e5-9c33-1dcfd8d51f08", 00:27:38.058 "strip_size_kb": 64, 00:27:38.058 "state": "online", 00:27:38.058 "raid_level": "raid0", 00:27:38.058 "superblock": true, 00:27:38.058 "num_base_bdevs": 4, 00:27:38.058 "num_base_bdevs_discovered": 4, 00:27:38.058 
"num_base_bdevs_operational": 4, 00:27:38.058 "base_bdevs_list": [ 00:27:38.058 { 00:27:38.058 "name": "BaseBdev1", 00:27:38.058 "uuid": "f1bdc43f-4714-5547-8e00-9c402be2c6af", 00:27:38.058 "is_configured": true, 00:27:38.058 "data_offset": 2048, 00:27:38.058 "data_size": 63488 00:27:38.058 }, 00:27:38.058 { 00:27:38.058 "name": "BaseBdev2", 00:27:38.058 "uuid": "83dd14a9-25f6-5f94-a51b-bdb5036f6c21", 00:27:38.058 "is_configured": true, 00:27:38.058 "data_offset": 2048, 00:27:38.058 "data_size": 63488 00:27:38.058 }, 00:27:38.058 { 00:27:38.058 "name": "BaseBdev3", 00:27:38.058 "uuid": "4dc2ceba-fcdb-55d2-bdad-f92b95fe700f", 00:27:38.058 "is_configured": true, 00:27:38.058 "data_offset": 2048, 00:27:38.058 "data_size": 63488 00:27:38.058 }, 00:27:38.058 { 00:27:38.058 "name": "BaseBdev4", 00:27:38.058 "uuid": "cccbca66-285f-5d54-959c-2489fbadadf0", 00:27:38.058 "is_configured": true, 00:27:38.058 "data_offset": 2048, 00:27:38.058 "data_size": 63488 00:27:38.058 } 00:27:38.058 ] 00:27:38.058 }' 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.058 15:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.315 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:38.315 15:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:38.315 [2024-11-05 15:57:10.636194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:39.247 "name": "raid_bdev1", 00:27:39.247 "uuid": "63055e24-9b5c-46e5-9c33-1dcfd8d51f08", 00:27:39.247 "strip_size_kb": 64, 00:27:39.247 "state": "online", 00:27:39.247 "raid_level": "raid0", 00:27:39.247 "superblock": true, 00:27:39.247 "num_base_bdevs": 4, 00:27:39.247 "num_base_bdevs_discovered": 4, 00:27:39.247 "num_base_bdevs_operational": 4, 00:27:39.247 "base_bdevs_list": [ 00:27:39.247 { 00:27:39.247 "name": "BaseBdev1", 00:27:39.247 "uuid": "f1bdc43f-4714-5547-8e00-9c402be2c6af", 00:27:39.247 "is_configured": true, 00:27:39.247 "data_offset": 2048, 00:27:39.247 "data_size": 63488 00:27:39.247 }, 00:27:39.247 { 00:27:39.247 "name": "BaseBdev2", 00:27:39.247 "uuid": "83dd14a9-25f6-5f94-a51b-bdb5036f6c21", 00:27:39.247 "is_configured": true, 00:27:39.247 "data_offset": 2048, 00:27:39.247 "data_size": 63488 00:27:39.247 }, 00:27:39.247 { 00:27:39.247 "name": "BaseBdev3", 00:27:39.247 "uuid": "4dc2ceba-fcdb-55d2-bdad-f92b95fe700f", 00:27:39.247 "is_configured": true, 00:27:39.247 "data_offset": 2048, 00:27:39.247 "data_size": 63488 00:27:39.247 }, 00:27:39.247 { 00:27:39.247 "name": "BaseBdev4", 00:27:39.247 "uuid": "cccbca66-285f-5d54-959c-2489fbadadf0", 00:27:39.247 "is_configured": true, 00:27:39.247 "data_offset": 2048, 00:27:39.247 "data_size": 63488 00:27:39.247 } 00:27:39.247 ] 00:27:39.247 }' 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:39.247 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:27:39.505 [2024-11-05 15:57:11.879744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:39.505 [2024-11-05 15:57:11.879874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:39.505 [2024-11-05 15:57:11.882251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:39.505 [2024-11-05 15:57:11.882300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.505 [2024-11-05 15:57:11.882335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:39.505 [2024-11-05 15:57:11.882345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:39.505 { 00:27:39.505 "results": [ 00:27:39.505 { 00:27:39.505 "job": "raid_bdev1", 00:27:39.505 "core_mask": "0x1", 00:27:39.505 "workload": "randrw", 00:27:39.505 "percentage": 50, 00:27:39.505 "status": "finished", 00:27:39.505 "queue_depth": 1, 00:27:39.505 "io_size": 131072, 00:27:39.505 "runtime": 1.242204, 00:27:39.505 "iops": 18774.69401161162, 00:27:39.505 "mibps": 2346.8367514514525, 00:27:39.505 "io_failed": 1, 00:27:39.505 "io_timeout": 0, 00:27:39.505 "avg_latency_us": 73.05136494513505, 00:27:39.505 "min_latency_us": 25.6, 00:27:39.505 "max_latency_us": 1342.2276923076922 00:27:39.505 } 00:27:39.505 ], 00:27:39.505 "core_count": 1 00:27:39.505 } 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69122 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69122 ']' 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69122 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:27:39.505 
15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69122 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:39.505 killing process with pid 69122 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69122' 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69122 00:27:39.505 [2024-11-05 15:57:11.907857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:39.505 15:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69122 00:27:39.763 [2024-11-05 15:57:12.066671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.le21okBHSA 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:27:40.329 ************************************ 00:27:40.329 END TEST raid_write_error_test 00:27:40.329 ************************************ 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != 
\0\.\0\0 ]] 00:27:40.329 00:27:40.329 real 0m3.505s 00:27:40.329 user 0m4.205s 00:27:40.329 sys 0m0.360s 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:40.329 15:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:40.329 15:57:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:27:40.329 15:57:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:27:40.329 15:57:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:40.329 15:57:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:40.329 15:57:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:40.329 ************************************ 00:27:40.329 START TEST raid_state_function_test 00:27:40.329 ************************************ 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:40.329 Process raid pid: 69249 00:27:40.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:40.329 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69249 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69249' 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69249 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69249 ']' 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.330 15:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:40.586 [2024-11-05 15:57:12.762266] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:27:40.587 [2024-11-05 15:57:12.762377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.587 [2024-11-05 15:57:12.919315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.844 [2024-11-05 15:57:13.017656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.844 [2024-11-05 15:57:13.153801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:40.844 [2024-11-05 15:57:13.153976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.408 
[2024-11-05 15:57:13.566275] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:41.408 [2024-11-05 15:57:13.566322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:41.408 [2024-11-05 15:57:13.566332] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:41.408 [2024-11-05 15:57:13.566342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:41.408 [2024-11-05 15:57:13.566348] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:41.408 [2024-11-05 15:57:13.566357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:41.408 [2024-11-05 15:57:13.566363] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:41.408 [2024-11-05 15:57:13.566371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:41.408 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.408 
15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.409 "name": "Existed_Raid", 00:27:41.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.409 "strip_size_kb": 64, 00:27:41.409 "state": "configuring", 00:27:41.409 "raid_level": "concat", 00:27:41.409 "superblock": false, 00:27:41.409 "num_base_bdevs": 4, 00:27:41.409 "num_base_bdevs_discovered": 0, 00:27:41.409 "num_base_bdevs_operational": 4, 00:27:41.409 "base_bdevs_list": [ 00:27:41.409 { 00:27:41.409 "name": "BaseBdev1", 00:27:41.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.409 "is_configured": false, 00:27:41.409 "data_offset": 0, 00:27:41.409 "data_size": 0 00:27:41.409 }, 00:27:41.409 { 00:27:41.409 "name": "BaseBdev2", 00:27:41.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.409 "is_configured": false, 00:27:41.409 "data_offset": 0, 00:27:41.409 "data_size": 0 00:27:41.409 }, 00:27:41.409 { 00:27:41.409 "name": "BaseBdev3", 00:27:41.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.409 "is_configured": false, 00:27:41.409 
"data_offset": 0, 00:27:41.409 "data_size": 0 00:27:41.409 }, 00:27:41.409 { 00:27:41.409 "name": "BaseBdev4", 00:27:41.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.409 "is_configured": false, 00:27:41.409 "data_offset": 0, 00:27:41.409 "data_size": 0 00:27:41.409 } 00:27:41.409 ] 00:27:41.409 }' 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.409 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.667 [2024-11-05 15:57:13.882297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:41.667 [2024-11-05 15:57:13.882329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.667 [2024-11-05 15:57:13.890297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:41.667 [2024-11-05 15:57:13.890332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:41.667 [2024-11-05 15:57:13.890340] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:27:41.667 [2024-11-05 15:57:13.890350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:41.667 [2024-11-05 15:57:13.890356] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:41.667 [2024-11-05 15:57:13.890365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:41.667 [2024-11-05 15:57:13.890371] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:41.667 [2024-11-05 15:57:13.890379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.667 [2024-11-05 15:57:13.918648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:41.667 BaseBdev1 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:41.667 15:57:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.667 [ 00:27:41.667 { 00:27:41.667 "name": "BaseBdev1", 00:27:41.667 "aliases": [ 00:27:41.667 "1fd961a9-2814-49cd-95c3-c8b873760a09" 00:27:41.667 ], 00:27:41.667 "product_name": "Malloc disk", 00:27:41.667 "block_size": 512, 00:27:41.667 "num_blocks": 65536, 00:27:41.667 "uuid": "1fd961a9-2814-49cd-95c3-c8b873760a09", 00:27:41.667 "assigned_rate_limits": { 00:27:41.667 "rw_ios_per_sec": 0, 00:27:41.667 "rw_mbytes_per_sec": 0, 00:27:41.667 "r_mbytes_per_sec": 0, 00:27:41.667 "w_mbytes_per_sec": 0 00:27:41.667 }, 00:27:41.667 "claimed": true, 00:27:41.667 "claim_type": "exclusive_write", 00:27:41.667 "zoned": false, 00:27:41.667 "supported_io_types": { 00:27:41.667 "read": true, 00:27:41.667 "write": true, 00:27:41.667 "unmap": true, 00:27:41.667 "flush": true, 00:27:41.667 "reset": true, 00:27:41.667 "nvme_admin": false, 00:27:41.667 "nvme_io": false, 00:27:41.667 "nvme_io_md": false, 00:27:41.667 "write_zeroes": true, 00:27:41.667 "zcopy": true, 00:27:41.667 "get_zone_info": false, 00:27:41.667 "zone_management": false, 00:27:41.667 "zone_append": false, 00:27:41.667 "compare": false, 
00:27:41.667 "compare_and_write": false, 00:27:41.667 "abort": true, 00:27:41.667 "seek_hole": false, 00:27:41.667 "seek_data": false, 00:27:41.667 "copy": true, 00:27:41.667 "nvme_iov_md": false 00:27:41.667 }, 00:27:41.667 "memory_domains": [ 00:27:41.667 { 00:27:41.667 "dma_device_id": "system", 00:27:41.667 "dma_device_type": 1 00:27:41.667 }, 00:27:41.667 { 00:27:41.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:41.667 "dma_device_type": 2 00:27:41.667 } 00:27:41.667 ], 00:27:41.667 "driver_specific": {} 00:27:41.667 } 00:27:41.667 ] 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.667 15:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.667 "name": "Existed_Raid", 00:27:41.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.667 "strip_size_kb": 64, 00:27:41.667 "state": "configuring", 00:27:41.667 "raid_level": "concat", 00:27:41.667 "superblock": false, 00:27:41.667 "num_base_bdevs": 4, 00:27:41.668 "num_base_bdevs_discovered": 1, 00:27:41.668 "num_base_bdevs_operational": 4, 00:27:41.668 "base_bdevs_list": [ 00:27:41.668 { 00:27:41.668 "name": "BaseBdev1", 00:27:41.668 "uuid": "1fd961a9-2814-49cd-95c3-c8b873760a09", 00:27:41.668 "is_configured": true, 00:27:41.668 "data_offset": 0, 00:27:41.668 "data_size": 65536 00:27:41.668 }, 00:27:41.668 { 00:27:41.668 "name": "BaseBdev2", 00:27:41.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.668 "is_configured": false, 00:27:41.668 "data_offset": 0, 00:27:41.668 "data_size": 0 00:27:41.668 }, 00:27:41.668 { 00:27:41.668 "name": "BaseBdev3", 00:27:41.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.668 "is_configured": false, 00:27:41.668 "data_offset": 0, 00:27:41.668 "data_size": 0 00:27:41.668 }, 00:27:41.668 { 00:27:41.668 "name": "BaseBdev4", 00:27:41.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.668 "is_configured": false, 00:27:41.668 "data_offset": 0, 00:27:41.668 "data_size": 0 00:27:41.668 } 00:27:41.668 ] 00:27:41.668 }' 00:27:41.668 15:57:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.668 15:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.925 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:41.925 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.925 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.925 [2024-11-05 15:57:14.250750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:41.925 [2024-11-05 15:57:14.250893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:41.925 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.925 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:41.925 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.925 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.925 [2024-11-05 15:57:14.258789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:41.925 [2024-11-05 15:57:14.260386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:41.925 [2024-11-05 15:57:14.260486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:41.925 [2024-11-05 15:57:14.260536] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:41.925 [2024-11-05 15:57:14.260558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:41.926 [2024-11-05 15:57:14.260662] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:27:41.926 [2024-11-05 15:57:14.260681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.926 "name": "Existed_Raid", 00:27:41.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.926 "strip_size_kb": 64, 00:27:41.926 "state": "configuring", 00:27:41.926 "raid_level": "concat", 00:27:41.926 "superblock": false, 00:27:41.926 "num_base_bdevs": 4, 00:27:41.926 "num_base_bdevs_discovered": 1, 00:27:41.926 "num_base_bdevs_operational": 4, 00:27:41.926 "base_bdevs_list": [ 00:27:41.926 { 00:27:41.926 "name": "BaseBdev1", 00:27:41.926 "uuid": "1fd961a9-2814-49cd-95c3-c8b873760a09", 00:27:41.926 "is_configured": true, 00:27:41.926 "data_offset": 0, 00:27:41.926 "data_size": 65536 00:27:41.926 }, 00:27:41.926 { 00:27:41.926 "name": "BaseBdev2", 00:27:41.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.926 "is_configured": false, 00:27:41.926 "data_offset": 0, 00:27:41.926 "data_size": 0 00:27:41.926 }, 00:27:41.926 { 00:27:41.926 "name": "BaseBdev3", 00:27:41.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.926 "is_configured": false, 00:27:41.926 "data_offset": 0, 00:27:41.926 "data_size": 0 00:27:41.926 }, 00:27:41.926 { 00:27:41.926 "name": "BaseBdev4", 00:27:41.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.926 "is_configured": false, 00:27:41.926 "data_offset": 0, 00:27:41.926 "data_size": 0 00:27:41.926 } 00:27:41.926 ] 00:27:41.926 }' 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.926 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.183 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2 00:27:42.183 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.183 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.183 [2024-11-05 15:57:14.585011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:42.183 BaseBdev2 00:27:42.183 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.184 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.443 [ 00:27:42.443 { 00:27:42.443 "name": 
"BaseBdev2", 00:27:42.443 "aliases": [ 00:27:42.443 "735c3d88-9829-4c6a-a455-c009a4b32b63" 00:27:42.443 ], 00:27:42.443 "product_name": "Malloc disk", 00:27:42.443 "block_size": 512, 00:27:42.443 "num_blocks": 65536, 00:27:42.443 "uuid": "735c3d88-9829-4c6a-a455-c009a4b32b63", 00:27:42.443 "assigned_rate_limits": { 00:27:42.443 "rw_ios_per_sec": 0, 00:27:42.443 "rw_mbytes_per_sec": 0, 00:27:42.443 "r_mbytes_per_sec": 0, 00:27:42.443 "w_mbytes_per_sec": 0 00:27:42.443 }, 00:27:42.443 "claimed": true, 00:27:42.443 "claim_type": "exclusive_write", 00:27:42.443 "zoned": false, 00:27:42.443 "supported_io_types": { 00:27:42.443 "read": true, 00:27:42.443 "write": true, 00:27:42.443 "unmap": true, 00:27:42.443 "flush": true, 00:27:42.443 "reset": true, 00:27:42.443 "nvme_admin": false, 00:27:42.443 "nvme_io": false, 00:27:42.443 "nvme_io_md": false, 00:27:42.443 "write_zeroes": true, 00:27:42.443 "zcopy": true, 00:27:42.443 "get_zone_info": false, 00:27:42.443 "zone_management": false, 00:27:42.443 "zone_append": false, 00:27:42.443 "compare": false, 00:27:42.443 "compare_and_write": false, 00:27:42.443 "abort": true, 00:27:42.443 "seek_hole": false, 00:27:42.443 "seek_data": false, 00:27:42.443 "copy": true, 00:27:42.443 "nvme_iov_md": false 00:27:42.443 }, 00:27:42.443 "memory_domains": [ 00:27:42.443 { 00:27:42.443 "dma_device_id": "system", 00:27:42.443 "dma_device_type": 1 00:27:42.443 }, 00:27:42.443 { 00:27:42.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.443 "dma_device_type": 2 00:27:42.443 } 00:27:42.443 ], 00:27:42.443 "driver_specific": {} 00:27:42.443 } 00:27:42.443 ] 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:42.443 "name": "Existed_Raid", 00:27:42.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.443 
"strip_size_kb": 64, 00:27:42.443 "state": "configuring", 00:27:42.443 "raid_level": "concat", 00:27:42.443 "superblock": false, 00:27:42.443 "num_base_bdevs": 4, 00:27:42.443 "num_base_bdevs_discovered": 2, 00:27:42.443 "num_base_bdevs_operational": 4, 00:27:42.443 "base_bdevs_list": [ 00:27:42.443 { 00:27:42.443 "name": "BaseBdev1", 00:27:42.443 "uuid": "1fd961a9-2814-49cd-95c3-c8b873760a09", 00:27:42.443 "is_configured": true, 00:27:42.443 "data_offset": 0, 00:27:42.443 "data_size": 65536 00:27:42.443 }, 00:27:42.443 { 00:27:42.443 "name": "BaseBdev2", 00:27:42.443 "uuid": "735c3d88-9829-4c6a-a455-c009a4b32b63", 00:27:42.443 "is_configured": true, 00:27:42.443 "data_offset": 0, 00:27:42.443 "data_size": 65536 00:27:42.443 }, 00:27:42.443 { 00:27:42.443 "name": "BaseBdev3", 00:27:42.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.443 "is_configured": false, 00:27:42.443 "data_offset": 0, 00:27:42.443 "data_size": 0 00:27:42.443 }, 00:27:42.443 { 00:27:42.443 "name": "BaseBdev4", 00:27:42.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.443 "is_configured": false, 00:27:42.443 "data_offset": 0, 00:27:42.443 "data_size": 0 00:27:42.443 } 00:27:42.443 ] 00:27:42.443 }' 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:42.443 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 [2024-11-05 15:57:14.950290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:42.719 BaseBdev3 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 [ 00:27:42.719 { 00:27:42.719 "name": "BaseBdev3", 00:27:42.719 "aliases": [ 00:27:42.719 "461b4158-9341-4d2c-8d97-1862f4686aca" 00:27:42.719 ], 00:27:42.719 "product_name": "Malloc disk", 00:27:42.719 "block_size": 512, 00:27:42.719 "num_blocks": 65536, 00:27:42.719 "uuid": "461b4158-9341-4d2c-8d97-1862f4686aca", 00:27:42.719 "assigned_rate_limits": { 00:27:42.719 "rw_ios_per_sec": 0, 00:27:42.719 "rw_mbytes_per_sec": 0, 00:27:42.719 "r_mbytes_per_sec": 0, 00:27:42.719 "w_mbytes_per_sec": 0 
00:27:42.719 }, 00:27:42.719 "claimed": true, 00:27:42.719 "claim_type": "exclusive_write", 00:27:42.719 "zoned": false, 00:27:42.719 "supported_io_types": { 00:27:42.719 "read": true, 00:27:42.719 "write": true, 00:27:42.719 "unmap": true, 00:27:42.719 "flush": true, 00:27:42.719 "reset": true, 00:27:42.719 "nvme_admin": false, 00:27:42.719 "nvme_io": false, 00:27:42.719 "nvme_io_md": false, 00:27:42.719 "write_zeroes": true, 00:27:42.719 "zcopy": true, 00:27:42.719 "get_zone_info": false, 00:27:42.719 "zone_management": false, 00:27:42.719 "zone_append": false, 00:27:42.719 "compare": false, 00:27:42.719 "compare_and_write": false, 00:27:42.719 "abort": true, 00:27:42.719 "seek_hole": false, 00:27:42.719 "seek_data": false, 00:27:42.719 "copy": true, 00:27:42.719 "nvme_iov_md": false 00:27:42.719 }, 00:27:42.719 "memory_domains": [ 00:27:42.719 { 00:27:42.719 "dma_device_id": "system", 00:27:42.719 "dma_device_type": 1 00:27:42.719 }, 00:27:42.719 { 00:27:42.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.719 "dma_device_type": 2 00:27:42.719 } 00:27:42.719 ], 00:27:42.719 "driver_specific": {} 00:27:42.719 } 00:27:42.719 ] 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:42.719 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:42.720 15:57:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.720 15:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.720 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:42.720 "name": "Existed_Raid", 00:27:42.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.720 "strip_size_kb": 64, 00:27:42.720 "state": "configuring", 00:27:42.720 "raid_level": "concat", 00:27:42.720 "superblock": false, 00:27:42.720 "num_base_bdevs": 4, 00:27:42.720 "num_base_bdevs_discovered": 3, 00:27:42.720 "num_base_bdevs_operational": 4, 00:27:42.720 "base_bdevs_list": [ 00:27:42.720 { 00:27:42.720 "name": "BaseBdev1", 00:27:42.720 "uuid": "1fd961a9-2814-49cd-95c3-c8b873760a09", 00:27:42.720 "is_configured": true, 00:27:42.720 "data_offset": 
0, 00:27:42.720 "data_size": 65536 00:27:42.720 }, 00:27:42.720 { 00:27:42.720 "name": "BaseBdev2", 00:27:42.720 "uuid": "735c3d88-9829-4c6a-a455-c009a4b32b63", 00:27:42.720 "is_configured": true, 00:27:42.720 "data_offset": 0, 00:27:42.720 "data_size": 65536 00:27:42.720 }, 00:27:42.720 { 00:27:42.720 "name": "BaseBdev3", 00:27:42.720 "uuid": "461b4158-9341-4d2c-8d97-1862f4686aca", 00:27:42.720 "is_configured": true, 00:27:42.720 "data_offset": 0, 00:27:42.720 "data_size": 65536 00:27:42.720 }, 00:27:42.720 { 00:27:42.720 "name": "BaseBdev4", 00:27:42.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.720 "is_configured": false, 00:27:42.720 "data_offset": 0, 00:27:42.720 "data_size": 0 00:27:42.720 } 00:27:42.720 ] 00:27:42.720 }' 00:27:42.720 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:42.720 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.978 [2024-11-05 15:57:15.292613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:42.978 [2024-11-05 15:57:15.292647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:42.978 [2024-11-05 15:57:15.292654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:42.978 [2024-11-05 15:57:15.292882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:42.978 [2024-11-05 15:57:15.293012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:42.978 [2024-11-05 15:57:15.293021] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:42.978 [2024-11-05 15:57:15.293204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.978 BaseBdev4 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.978 [ 00:27:42.978 { 00:27:42.978 "name": "BaseBdev4", 00:27:42.978 "aliases": [ 00:27:42.978 "08220ea4-8a43-43a4-98d9-36f81d3c545f" 00:27:42.978 ], 00:27:42.978 
"product_name": "Malloc disk", 00:27:42.978 "block_size": 512, 00:27:42.978 "num_blocks": 65536, 00:27:42.978 "uuid": "08220ea4-8a43-43a4-98d9-36f81d3c545f", 00:27:42.978 "assigned_rate_limits": { 00:27:42.978 "rw_ios_per_sec": 0, 00:27:42.978 "rw_mbytes_per_sec": 0, 00:27:42.978 "r_mbytes_per_sec": 0, 00:27:42.978 "w_mbytes_per_sec": 0 00:27:42.978 }, 00:27:42.978 "claimed": true, 00:27:42.978 "claim_type": "exclusive_write", 00:27:42.978 "zoned": false, 00:27:42.978 "supported_io_types": { 00:27:42.978 "read": true, 00:27:42.978 "write": true, 00:27:42.978 "unmap": true, 00:27:42.978 "flush": true, 00:27:42.978 "reset": true, 00:27:42.978 "nvme_admin": false, 00:27:42.978 "nvme_io": false, 00:27:42.978 "nvme_io_md": false, 00:27:42.978 "write_zeroes": true, 00:27:42.978 "zcopy": true, 00:27:42.978 "get_zone_info": false, 00:27:42.978 "zone_management": false, 00:27:42.978 "zone_append": false, 00:27:42.978 "compare": false, 00:27:42.978 "compare_and_write": false, 00:27:42.978 "abort": true, 00:27:42.978 "seek_hole": false, 00:27:42.978 "seek_data": false, 00:27:42.978 "copy": true, 00:27:42.978 "nvme_iov_md": false 00:27:42.978 }, 00:27:42.978 "memory_domains": [ 00:27:42.978 { 00:27:42.978 "dma_device_id": "system", 00:27:42.978 "dma_device_type": 1 00:27:42.978 }, 00:27:42.978 { 00:27:42.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.978 "dma_device_type": 2 00:27:42.978 } 00:27:42.978 ], 00:27:42.978 "driver_specific": {} 00:27:42.978 } 00:27:42.978 ] 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:42.978 "name": "Existed_Raid", 00:27:42.978 "uuid": "15598ec2-132b-40db-84a8-8764a0c8f369", 00:27:42.978 "strip_size_kb": 64, 00:27:42.978 "state": "online", 00:27:42.978 "raid_level": "concat", 00:27:42.978 "superblock": false, 00:27:42.978 
"num_base_bdevs": 4, 00:27:42.978 "num_base_bdevs_discovered": 4, 00:27:42.978 "num_base_bdevs_operational": 4, 00:27:42.978 "base_bdevs_list": [ 00:27:42.978 { 00:27:42.978 "name": "BaseBdev1", 00:27:42.978 "uuid": "1fd961a9-2814-49cd-95c3-c8b873760a09", 00:27:42.978 "is_configured": true, 00:27:42.978 "data_offset": 0, 00:27:42.978 "data_size": 65536 00:27:42.978 }, 00:27:42.978 { 00:27:42.978 "name": "BaseBdev2", 00:27:42.978 "uuid": "735c3d88-9829-4c6a-a455-c009a4b32b63", 00:27:42.978 "is_configured": true, 00:27:42.978 "data_offset": 0, 00:27:42.978 "data_size": 65536 00:27:42.978 }, 00:27:42.978 { 00:27:42.978 "name": "BaseBdev3", 00:27:42.978 "uuid": "461b4158-9341-4d2c-8d97-1862f4686aca", 00:27:42.978 "is_configured": true, 00:27:42.978 "data_offset": 0, 00:27:42.978 "data_size": 65536 00:27:42.978 }, 00:27:42.978 { 00:27:42.978 "name": "BaseBdev4", 00:27:42.978 "uuid": "08220ea4-8a43-43a4-98d9-36f81d3c545f", 00:27:42.978 "is_configured": true, 00:27:42.978 "data_offset": 0, 00:27:42.978 "data_size": 65536 00:27:42.978 } 00:27:42.978 ] 00:27:42.978 }' 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:42.978 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:43.236 15:57:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:43.236 [2024-11-05 15:57:15.621035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.236 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.236 "name": "Existed_Raid", 00:27:43.236 "aliases": [ 00:27:43.236 "15598ec2-132b-40db-84a8-8764a0c8f369" 00:27:43.236 ], 00:27:43.236 "product_name": "Raid Volume", 00:27:43.236 "block_size": 512, 00:27:43.236 "num_blocks": 262144, 00:27:43.236 "uuid": "15598ec2-132b-40db-84a8-8764a0c8f369", 00:27:43.236 "assigned_rate_limits": { 00:27:43.236 "rw_ios_per_sec": 0, 00:27:43.236 "rw_mbytes_per_sec": 0, 00:27:43.236 "r_mbytes_per_sec": 0, 00:27:43.236 "w_mbytes_per_sec": 0 00:27:43.236 }, 00:27:43.236 "claimed": false, 00:27:43.236 "zoned": false, 00:27:43.236 "supported_io_types": { 00:27:43.236 "read": true, 00:27:43.236 "write": true, 00:27:43.236 "unmap": true, 00:27:43.236 "flush": true, 00:27:43.236 "reset": true, 00:27:43.236 "nvme_admin": false, 00:27:43.236 "nvme_io": false, 00:27:43.236 "nvme_io_md": false, 00:27:43.236 "write_zeroes": true, 00:27:43.236 "zcopy": false, 00:27:43.236 "get_zone_info": false, 00:27:43.236 "zone_management": false, 00:27:43.236 "zone_append": false, 00:27:43.236 "compare": false, 00:27:43.236 "compare_and_write": false, 00:27:43.236 "abort": false, 00:27:43.236 "seek_hole": false, 00:27:43.236 "seek_data": false, 00:27:43.236 "copy": false, 00:27:43.236 "nvme_iov_md": false 00:27:43.236 }, 
00:27:43.236 "memory_domains": [ 00:27:43.236 { 00:27:43.236 "dma_device_id": "system", 00:27:43.236 "dma_device_type": 1 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.236 "dma_device_type": 2 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "dma_device_id": "system", 00:27:43.236 "dma_device_type": 1 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.236 "dma_device_type": 2 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "dma_device_id": "system", 00:27:43.236 "dma_device_type": 1 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.236 "dma_device_type": 2 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "dma_device_id": "system", 00:27:43.236 "dma_device_type": 1 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.236 "dma_device_type": 2 00:27:43.236 } 00:27:43.236 ], 00:27:43.236 "driver_specific": { 00:27:43.236 "raid": { 00:27:43.236 "uuid": "15598ec2-132b-40db-84a8-8764a0c8f369", 00:27:43.236 "strip_size_kb": 64, 00:27:43.236 "state": "online", 00:27:43.236 "raid_level": "concat", 00:27:43.236 "superblock": false, 00:27:43.236 "num_base_bdevs": 4, 00:27:43.236 "num_base_bdevs_discovered": 4, 00:27:43.236 "num_base_bdevs_operational": 4, 00:27:43.236 "base_bdevs_list": [ 00:27:43.236 { 00:27:43.236 "name": "BaseBdev1", 00:27:43.236 "uuid": "1fd961a9-2814-49cd-95c3-c8b873760a09", 00:27:43.236 "is_configured": true, 00:27:43.236 "data_offset": 0, 00:27:43.236 "data_size": 65536 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "name": "BaseBdev2", 00:27:43.236 "uuid": "735c3d88-9829-4c6a-a455-c009a4b32b63", 00:27:43.236 "is_configured": true, 00:27:43.236 "data_offset": 0, 00:27:43.236 "data_size": 65536 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "name": "BaseBdev3", 00:27:43.236 "uuid": "461b4158-9341-4d2c-8d97-1862f4686aca", 00:27:43.236 "is_configured": true, 00:27:43.236 "data_offset": 0, 
00:27:43.236 "data_size": 65536 00:27:43.236 }, 00:27:43.236 { 00:27:43.236 "name": "BaseBdev4", 00:27:43.236 "uuid": "08220ea4-8a43-43a4-98d9-36f81d3c545f", 00:27:43.236 "is_configured": true, 00:27:43.236 "data_offset": 0, 00:27:43.236 "data_size": 65536 00:27:43.236 } 00:27:43.236 ] 00:27:43.236 } 00:27:43.236 } 00:27:43.236 }' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:43.494 BaseBdev2 00:27:43.494 BaseBdev3 00:27:43.494 BaseBdev4' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:43.494 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.495 [2024-11-05 15:57:15.840793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:43.495 [2024-11-05 15:57:15.840907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:43.495 [2024-11-05 15:57:15.840956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.495 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.752 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:43.752 "name": "Existed_Raid", 00:27:43.752 "uuid": "15598ec2-132b-40db-84a8-8764a0c8f369", 00:27:43.752 
"strip_size_kb": 64, 00:27:43.752 "state": "offline", 00:27:43.752 "raid_level": "concat", 00:27:43.752 "superblock": false, 00:27:43.752 "num_base_bdevs": 4, 00:27:43.752 "num_base_bdevs_discovered": 3, 00:27:43.752 "num_base_bdevs_operational": 3, 00:27:43.752 "base_bdevs_list": [ 00:27:43.752 { 00:27:43.752 "name": null, 00:27:43.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.752 "is_configured": false, 00:27:43.752 "data_offset": 0, 00:27:43.752 "data_size": 65536 00:27:43.752 }, 00:27:43.752 { 00:27:43.752 "name": "BaseBdev2", 00:27:43.752 "uuid": "735c3d88-9829-4c6a-a455-c009a4b32b63", 00:27:43.752 "is_configured": true, 00:27:43.752 "data_offset": 0, 00:27:43.752 "data_size": 65536 00:27:43.752 }, 00:27:43.752 { 00:27:43.752 "name": "BaseBdev3", 00:27:43.752 "uuid": "461b4158-9341-4d2c-8d97-1862f4686aca", 00:27:43.752 "is_configured": true, 00:27:43.752 "data_offset": 0, 00:27:43.752 "data_size": 65536 00:27:43.752 }, 00:27:43.752 { 00:27:43.752 "name": "BaseBdev4", 00:27:43.752 "uuid": "08220ea4-8a43-43a4-98d9-36f81d3c545f", 00:27:43.752 "is_configured": true, 00:27:43.752 "data_offset": 0, 00:27:43.752 "data_size": 65536 00:27:43.752 } 00:27:43.752 ] 00:27:43.752 }' 00:27:43.752 15:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:43.752 15:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.009 15:57:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.009 [2024-11-05 15:57:16.231597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.009 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:44.010 15:57:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.010 [2024-11-05 15:57:16.313610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.010 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.010 [2024-11-05 15:57:16.387449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:44.010 [2024-11-05 
15:57:16.387565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.267 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 BaseBdev2 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 [ 00:27:44.268 { 00:27:44.268 "name": "BaseBdev2", 00:27:44.268 "aliases": [ 00:27:44.268 "bd0942a2-4b3b-4a81-a58f-79e20ba092e4" 00:27:44.268 ], 00:27:44.268 "product_name": "Malloc disk", 00:27:44.268 "block_size": 512, 00:27:44.268 "num_blocks": 65536, 00:27:44.268 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:44.268 "assigned_rate_limits": { 00:27:44.268 "rw_ios_per_sec": 0, 00:27:44.268 "rw_mbytes_per_sec": 0, 00:27:44.268 "r_mbytes_per_sec": 0, 00:27:44.268 "w_mbytes_per_sec": 0 00:27:44.268 }, 
00:27:44.268 "claimed": false, 00:27:44.268 "zoned": false, 00:27:44.268 "supported_io_types": { 00:27:44.268 "read": true, 00:27:44.268 "write": true, 00:27:44.268 "unmap": true, 00:27:44.268 "flush": true, 00:27:44.268 "reset": true, 00:27:44.268 "nvme_admin": false, 00:27:44.268 "nvme_io": false, 00:27:44.268 "nvme_io_md": false, 00:27:44.268 "write_zeroes": true, 00:27:44.268 "zcopy": true, 00:27:44.268 "get_zone_info": false, 00:27:44.268 "zone_management": false, 00:27:44.268 "zone_append": false, 00:27:44.268 "compare": false, 00:27:44.268 "compare_and_write": false, 00:27:44.268 "abort": true, 00:27:44.268 "seek_hole": false, 00:27:44.268 "seek_data": false, 00:27:44.268 "copy": true, 00:27:44.268 "nvme_iov_md": false 00:27:44.268 }, 00:27:44.268 "memory_domains": [ 00:27:44.268 { 00:27:44.268 "dma_device_id": "system", 00:27:44.268 "dma_device_type": 1 00:27:44.268 }, 00:27:44.268 { 00:27:44.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.268 "dma_device_type": 2 00:27:44.268 } 00:27:44.268 ], 00:27:44.268 "driver_specific": {} 00:27:44.268 } 00:27:44.268 ] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 BaseBdev3 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 
15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 [ 00:27:44.268 { 00:27:44.268 "name": "BaseBdev3", 00:27:44.268 "aliases": [ 00:27:44.268 "6cd43be8-fc59-4b93-b69c-02be39645df8" 00:27:44.268 ], 00:27:44.268 "product_name": "Malloc disk", 00:27:44.268 "block_size": 512, 00:27:44.268 "num_blocks": 65536, 00:27:44.268 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:44.268 "assigned_rate_limits": { 00:27:44.268 "rw_ios_per_sec": 0, 00:27:44.268 "rw_mbytes_per_sec": 0, 00:27:44.268 "r_mbytes_per_sec": 0, 00:27:44.268 "w_mbytes_per_sec": 0 00:27:44.268 }, 00:27:44.268 "claimed": 
false, 00:27:44.268 "zoned": false, 00:27:44.268 "supported_io_types": { 00:27:44.268 "read": true, 00:27:44.268 "write": true, 00:27:44.268 "unmap": true, 00:27:44.268 "flush": true, 00:27:44.268 "reset": true, 00:27:44.268 "nvme_admin": false, 00:27:44.268 "nvme_io": false, 00:27:44.268 "nvme_io_md": false, 00:27:44.268 "write_zeroes": true, 00:27:44.268 "zcopy": true, 00:27:44.268 "get_zone_info": false, 00:27:44.268 "zone_management": false, 00:27:44.268 "zone_append": false, 00:27:44.268 "compare": false, 00:27:44.268 "compare_and_write": false, 00:27:44.268 "abort": true, 00:27:44.268 "seek_hole": false, 00:27:44.268 "seek_data": false, 00:27:44.268 "copy": true, 00:27:44.268 "nvme_iov_md": false 00:27:44.268 }, 00:27:44.268 "memory_domains": [ 00:27:44.268 { 00:27:44.268 "dma_device_id": "system", 00:27:44.268 "dma_device_type": 1 00:27:44.268 }, 00:27:44.268 { 00:27:44.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.268 "dma_device_type": 2 00:27:44.268 } 00:27:44.268 ], 00:27:44.268 "driver_specific": {} 00:27:44.268 } 00:27:44.268 ] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 BaseBdev4 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 15:57:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.268 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.268 [ 00:27:44.269 { 00:27:44.269 "name": "BaseBdev4", 00:27:44.269 "aliases": [ 00:27:44.269 "83e409bc-48f8-47a2-8da0-34c21a66e8de" 00:27:44.269 ], 00:27:44.269 "product_name": "Malloc disk", 00:27:44.269 "block_size": 512, 00:27:44.269 "num_blocks": 65536, 00:27:44.269 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:44.269 "assigned_rate_limits": { 00:27:44.269 "rw_ios_per_sec": 0, 00:27:44.269 "rw_mbytes_per_sec": 0, 00:27:44.269 "r_mbytes_per_sec": 0, 00:27:44.269 "w_mbytes_per_sec": 0 00:27:44.269 }, 00:27:44.269 "claimed": false, 
00:27:44.269 "zoned": false, 00:27:44.269 "supported_io_types": { 00:27:44.269 "read": true, 00:27:44.269 "write": true, 00:27:44.269 "unmap": true, 00:27:44.269 "flush": true, 00:27:44.269 "reset": true, 00:27:44.269 "nvme_admin": false, 00:27:44.269 "nvme_io": false, 00:27:44.269 "nvme_io_md": false, 00:27:44.269 "write_zeroes": true, 00:27:44.269 "zcopy": true, 00:27:44.269 "get_zone_info": false, 00:27:44.269 "zone_management": false, 00:27:44.269 "zone_append": false, 00:27:44.269 "compare": false, 00:27:44.269 "compare_and_write": false, 00:27:44.269 "abort": true, 00:27:44.269 "seek_hole": false, 00:27:44.269 "seek_data": false, 00:27:44.269 "copy": true, 00:27:44.269 "nvme_iov_md": false 00:27:44.269 }, 00:27:44.269 "memory_domains": [ 00:27:44.269 { 00:27:44.269 "dma_device_id": "system", 00:27:44.269 "dma_device_type": 1 00:27:44.269 }, 00:27:44.269 { 00:27:44.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.269 "dma_device_type": 2 00:27:44.269 } 00:27:44.269 ], 00:27:44.269 "driver_specific": {} 00:27:44.269 } 00:27:44.269 ] 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.269 [2024-11-05 15:57:16.621160] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev1 00:27:44.269 [2024-11-05 15:57:16.621198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:44.269 [2024-11-05 15:57:16.621214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:44.269 [2024-11-05 15:57:16.622704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:44.269 [2024-11-05 15:57:16.622748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.269 "name": "Existed_Raid", 00:27:44.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.269 "strip_size_kb": 64, 00:27:44.269 "state": "configuring", 00:27:44.269 "raid_level": "concat", 00:27:44.269 "superblock": false, 00:27:44.269 "num_base_bdevs": 4, 00:27:44.269 "num_base_bdevs_discovered": 3, 00:27:44.269 "num_base_bdevs_operational": 4, 00:27:44.269 "base_bdevs_list": [ 00:27:44.269 { 00:27:44.269 "name": "BaseBdev1", 00:27:44.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.269 "is_configured": false, 00:27:44.269 "data_offset": 0, 00:27:44.269 "data_size": 0 00:27:44.269 }, 00:27:44.269 { 00:27:44.269 "name": "BaseBdev2", 00:27:44.269 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:44.269 "is_configured": true, 00:27:44.269 "data_offset": 0, 00:27:44.269 "data_size": 65536 00:27:44.269 }, 00:27:44.269 { 00:27:44.269 "name": "BaseBdev3", 00:27:44.269 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:44.269 "is_configured": true, 00:27:44.269 "data_offset": 0, 00:27:44.269 "data_size": 65536 00:27:44.269 }, 00:27:44.269 { 00:27:44.269 "name": "BaseBdev4", 00:27:44.269 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:44.269 "is_configured": true, 00:27:44.269 "data_offset": 0, 00:27:44.269 "data_size": 65536 00:27:44.269 } 00:27:44.269 ] 00:27:44.269 }' 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.269 15:57:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.527 [2024-11-05 15:57:16.921225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.527 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.784 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.784 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.784 "name": "Existed_Raid", 00:27:44.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.784 "strip_size_kb": 64, 00:27:44.784 "state": "configuring", 00:27:44.784 "raid_level": "concat", 00:27:44.784 "superblock": false, 00:27:44.784 "num_base_bdevs": 4, 00:27:44.784 "num_base_bdevs_discovered": 2, 00:27:44.784 "num_base_bdevs_operational": 4, 00:27:44.784 "base_bdevs_list": [ 00:27:44.784 { 00:27:44.784 "name": "BaseBdev1", 00:27:44.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.784 "is_configured": false, 00:27:44.784 "data_offset": 0, 00:27:44.784 "data_size": 0 00:27:44.784 }, 00:27:44.784 { 00:27:44.784 "name": null, 00:27:44.784 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:44.784 "is_configured": false, 00:27:44.784 "data_offset": 0, 00:27:44.784 "data_size": 65536 00:27:44.784 }, 00:27:44.784 { 00:27:44.784 "name": "BaseBdev3", 00:27:44.784 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:44.784 "is_configured": true, 00:27:44.784 "data_offset": 0, 00:27:44.784 "data_size": 65536 00:27:44.784 }, 00:27:44.784 { 00:27:44.784 "name": "BaseBdev4", 00:27:44.784 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:44.784 "is_configured": true, 00:27:44.784 "data_offset": 0, 00:27:44.784 "data_size": 65536 00:27:44.784 } 00:27:44.784 ] 00:27:44.784 }' 00:27:44.784 15:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.784 15:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.042 15:57:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.042 [2024-11-05 15:57:17.303695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:45.042 BaseBdev1 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.042 [ 00:27:45.042 { 00:27:45.042 "name": "BaseBdev1", 00:27:45.042 "aliases": [ 00:27:45.042 "a49d5026-08ad-4516-83d6-95eea58386ab" 00:27:45.042 ], 00:27:45.042 "product_name": "Malloc disk", 00:27:45.042 "block_size": 512, 00:27:45.042 "num_blocks": 65536, 00:27:45.042 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:45.042 "assigned_rate_limits": { 00:27:45.042 "rw_ios_per_sec": 0, 00:27:45.042 "rw_mbytes_per_sec": 0, 00:27:45.042 "r_mbytes_per_sec": 0, 00:27:45.042 "w_mbytes_per_sec": 0 00:27:45.042 }, 00:27:45.042 "claimed": true, 00:27:45.042 "claim_type": "exclusive_write", 00:27:45.042 "zoned": false, 00:27:45.042 "supported_io_types": { 00:27:45.042 "read": true, 00:27:45.042 "write": true, 00:27:45.042 "unmap": true, 00:27:45.042 "flush": true, 00:27:45.042 "reset": true, 00:27:45.042 "nvme_admin": false, 00:27:45.042 "nvme_io": false, 00:27:45.042 "nvme_io_md": false, 00:27:45.042 "write_zeroes": true, 00:27:45.042 "zcopy": true, 00:27:45.042 "get_zone_info": false, 00:27:45.042 "zone_management": false, 00:27:45.042 "zone_append": false, 00:27:45.042 "compare": false, 00:27:45.042 "compare_and_write": false, 00:27:45.042 "abort": true, 00:27:45.042 "seek_hole": false, 00:27:45.042 "seek_data": false, 00:27:45.042 
"copy": true, 00:27:45.042 "nvme_iov_md": false 00:27:45.042 }, 00:27:45.042 "memory_domains": [ 00:27:45.042 { 00:27:45.042 "dma_device_id": "system", 00:27:45.042 "dma_device_type": 1 00:27:45.042 }, 00:27:45.042 { 00:27:45.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.042 "dma_device_type": 2 00:27:45.042 } 00:27:45.042 ], 00:27:45.042 "driver_specific": {} 00:27:45.042 } 00:27:45.042 ] 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.042 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.043 "name": "Existed_Raid", 00:27:45.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.043 "strip_size_kb": 64, 00:27:45.043 "state": "configuring", 00:27:45.043 "raid_level": "concat", 00:27:45.043 "superblock": false, 00:27:45.043 "num_base_bdevs": 4, 00:27:45.043 "num_base_bdevs_discovered": 3, 00:27:45.043 "num_base_bdevs_operational": 4, 00:27:45.043 "base_bdevs_list": [ 00:27:45.043 { 00:27:45.043 "name": "BaseBdev1", 00:27:45.043 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:45.043 "is_configured": true, 00:27:45.043 "data_offset": 0, 00:27:45.043 "data_size": 65536 00:27:45.043 }, 00:27:45.043 { 00:27:45.043 "name": null, 00:27:45.043 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:45.043 "is_configured": false, 00:27:45.043 "data_offset": 0, 00:27:45.043 "data_size": 65536 00:27:45.043 }, 00:27:45.043 { 00:27:45.043 "name": "BaseBdev3", 00:27:45.043 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:45.043 "is_configured": true, 00:27:45.043 "data_offset": 0, 00:27:45.043 "data_size": 65536 00:27:45.043 }, 00:27:45.043 { 00:27:45.043 "name": "BaseBdev4", 00:27:45.043 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:45.043 "is_configured": true, 00:27:45.043 "data_offset": 0, 00:27:45.043 "data_size": 65536 00:27:45.043 } 00:27:45.043 ] 00:27:45.043 }' 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.043 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.300 
15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.300 [2024-11-05 15:57:17.667833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:45.300 15:57:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.300 "name": "Existed_Raid", 00:27:45.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.300 "strip_size_kb": 64, 00:27:45.300 "state": "configuring", 00:27:45.300 "raid_level": "concat", 00:27:45.300 "superblock": false, 00:27:45.300 "num_base_bdevs": 4, 00:27:45.300 "num_base_bdevs_discovered": 2, 00:27:45.300 "num_base_bdevs_operational": 4, 00:27:45.300 "base_bdevs_list": [ 00:27:45.300 { 00:27:45.300 "name": "BaseBdev1", 00:27:45.300 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:45.300 "is_configured": true, 00:27:45.300 "data_offset": 0, 00:27:45.300 "data_size": 65536 00:27:45.300 }, 00:27:45.300 { 00:27:45.300 "name": null, 00:27:45.300 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:45.300 "is_configured": false, 00:27:45.300 "data_offset": 0, 00:27:45.300 "data_size": 65536 00:27:45.300 }, 00:27:45.300 { 00:27:45.300 "name": null, 00:27:45.300 "uuid": 
"6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:45.300 "is_configured": false, 00:27:45.300 "data_offset": 0, 00:27:45.300 "data_size": 65536 00:27:45.300 }, 00:27:45.300 { 00:27:45.300 "name": "BaseBdev4", 00:27:45.300 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:45.300 "is_configured": true, 00:27:45.300 "data_offset": 0, 00:27:45.300 "data_size": 65536 00:27:45.300 } 00:27:45.300 ] 00:27:45.300 }' 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.300 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.557 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.557 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:45.557 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.557 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.557 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.815 [2024-11-05 15:57:17.995906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.815 15:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.815 "name": "Existed_Raid", 00:27:45.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.815 "strip_size_kb": 64, 00:27:45.815 "state": "configuring", 00:27:45.815 "raid_level": "concat", 00:27:45.815 "superblock": false, 00:27:45.815 "num_base_bdevs": 4, 
00:27:45.815 "num_base_bdevs_discovered": 3, 00:27:45.815 "num_base_bdevs_operational": 4, 00:27:45.815 "base_bdevs_list": [ 00:27:45.815 { 00:27:45.815 "name": "BaseBdev1", 00:27:45.815 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:45.815 "is_configured": true, 00:27:45.815 "data_offset": 0, 00:27:45.815 "data_size": 65536 00:27:45.815 }, 00:27:45.815 { 00:27:45.815 "name": null, 00:27:45.815 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:45.815 "is_configured": false, 00:27:45.815 "data_offset": 0, 00:27:45.815 "data_size": 65536 00:27:45.815 }, 00:27:45.815 { 00:27:45.815 "name": "BaseBdev3", 00:27:45.815 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:45.815 "is_configured": true, 00:27:45.815 "data_offset": 0, 00:27:45.815 "data_size": 65536 00:27:45.815 }, 00:27:45.815 { 00:27:45.815 "name": "BaseBdev4", 00:27:45.815 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:45.815 "is_configured": true, 00:27:45.815 "data_offset": 0, 00:27:45.815 "data_size": 65536 00:27:45.815 } 00:27:45.815 ] 00:27:45.815 }' 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.815 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.072 [2024-11-05 15:57:18.379982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.072 
15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.072 "name": "Existed_Raid", 00:27:46.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.072 "strip_size_kb": 64, 00:27:46.072 "state": "configuring", 00:27:46.072 "raid_level": "concat", 00:27:46.072 "superblock": false, 00:27:46.072 "num_base_bdevs": 4, 00:27:46.072 "num_base_bdevs_discovered": 2, 00:27:46.072 "num_base_bdevs_operational": 4, 00:27:46.072 "base_bdevs_list": [ 00:27:46.072 { 00:27:46.072 "name": null, 00:27:46.072 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:46.072 "is_configured": false, 00:27:46.072 "data_offset": 0, 00:27:46.072 "data_size": 65536 00:27:46.072 }, 00:27:46.072 { 00:27:46.072 "name": null, 00:27:46.072 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:46.072 "is_configured": false, 00:27:46.072 "data_offset": 0, 00:27:46.072 "data_size": 65536 00:27:46.072 }, 00:27:46.072 { 00:27:46.072 "name": "BaseBdev3", 00:27:46.072 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:46.072 "is_configured": true, 00:27:46.072 "data_offset": 0, 00:27:46.072 "data_size": 65536 00:27:46.072 }, 00:27:46.072 { 00:27:46.072 "name": "BaseBdev4", 00:27:46.072 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:46.072 "is_configured": true, 00:27:46.072 "data_offset": 0, 00:27:46.072 "data_size": 65536 00:27:46.072 } 00:27:46.072 ] 00:27:46.072 }' 00:27:46.072 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.073 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.330 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.330 15:57:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.330 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:46.330 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.330 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.588 [2024-11-05 15:57:18.766052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.588 15:57:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.588 "name": "Existed_Raid", 00:27:46.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.588 "strip_size_kb": 64, 00:27:46.588 "state": "configuring", 00:27:46.588 "raid_level": "concat", 00:27:46.588 "superblock": false, 00:27:46.588 "num_base_bdevs": 4, 00:27:46.588 "num_base_bdevs_discovered": 3, 00:27:46.588 "num_base_bdevs_operational": 4, 00:27:46.588 "base_bdevs_list": [ 00:27:46.588 { 00:27:46.588 "name": null, 00:27:46.588 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:46.588 "is_configured": false, 00:27:46.588 "data_offset": 0, 00:27:46.588 "data_size": 65536 00:27:46.588 }, 00:27:46.588 { 00:27:46.588 "name": "BaseBdev2", 00:27:46.588 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:46.588 "is_configured": true, 00:27:46.588 "data_offset": 0, 00:27:46.588 "data_size": 65536 00:27:46.588 }, 00:27:46.588 { 00:27:46.588 "name": "BaseBdev3", 00:27:46.588 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:46.588 "is_configured": true, 00:27:46.588 "data_offset": 0, 
00:27:46.588 "data_size": 65536 00:27:46.588 }, 00:27:46.588 { 00:27:46.588 "name": "BaseBdev4", 00:27:46.588 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:46.588 "is_configured": true, 00:27:46.588 "data_offset": 0, 00:27:46.588 "data_size": 65536 00:27:46.588 } 00:27:46.588 ] 00:27:46.588 }' 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.588 15:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a49d5026-08ad-4516-83d6-95eea58386ab 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.847 [2024-11-05 15:57:19.160209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:46.847 [2024-11-05 15:57:19.160245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:46.847 [2024-11-05 15:57:19.160251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:46.847 [2024-11-05 15:57:19.160450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:46.847 [2024-11-05 15:57:19.160560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:46.847 [2024-11-05 15:57:19.160574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:46.847 [2024-11-05 15:57:19.160737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.847 NewBaseBdev 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:46.847 
15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.847 [ 00:27:46.847 { 00:27:46.847 "name": "NewBaseBdev", 00:27:46.847 "aliases": [ 00:27:46.847 "a49d5026-08ad-4516-83d6-95eea58386ab" 00:27:46.847 ], 00:27:46.847 "product_name": "Malloc disk", 00:27:46.847 "block_size": 512, 00:27:46.847 "num_blocks": 65536, 00:27:46.847 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:46.847 "assigned_rate_limits": { 00:27:46.847 "rw_ios_per_sec": 0, 00:27:46.847 "rw_mbytes_per_sec": 0, 00:27:46.847 "r_mbytes_per_sec": 0, 00:27:46.847 "w_mbytes_per_sec": 0 00:27:46.847 }, 00:27:46.847 "claimed": true, 00:27:46.847 "claim_type": "exclusive_write", 00:27:46.847 "zoned": false, 00:27:46.847 "supported_io_types": { 00:27:46.847 "read": true, 00:27:46.847 "write": true, 00:27:46.847 "unmap": true, 00:27:46.847 "flush": true, 00:27:46.847 "reset": true, 00:27:46.847 "nvme_admin": false, 00:27:46.847 "nvme_io": false, 00:27:46.847 "nvme_io_md": false, 00:27:46.847 "write_zeroes": true, 00:27:46.847 "zcopy": true, 00:27:46.847 "get_zone_info": false, 00:27:46.847 "zone_management": false, 00:27:46.847 "zone_append": false, 00:27:46.847 "compare": false, 00:27:46.847 "compare_and_write": false, 00:27:46.847 "abort": true, 00:27:46.847 "seek_hole": false, 00:27:46.847 "seek_data": false, 00:27:46.847 "copy": true, 00:27:46.847 "nvme_iov_md": false 00:27:46.847 }, 00:27:46.847 
"memory_domains": [ 00:27:46.847 { 00:27:46.847 "dma_device_id": "system", 00:27:46.847 "dma_device_type": 1 00:27:46.847 }, 00:27:46.847 { 00:27:46.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.847 "dma_device_type": 2 00:27:46.847 } 00:27:46.847 ], 00:27:46.847 "driver_specific": {} 00:27:46.847 } 00:27:46.847 ] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.847 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.847 "name": "Existed_Raid", 00:27:46.847 "uuid": "97409867-bfba-4257-abcd-3c20f79c99f6", 00:27:46.847 "strip_size_kb": 64, 00:27:46.847 "state": "online", 00:27:46.847 "raid_level": "concat", 00:27:46.847 "superblock": false, 00:27:46.847 "num_base_bdevs": 4, 00:27:46.847 "num_base_bdevs_discovered": 4, 00:27:46.847 "num_base_bdevs_operational": 4, 00:27:46.847 "base_bdevs_list": [ 00:27:46.847 { 00:27:46.847 "name": "NewBaseBdev", 00:27:46.847 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:46.847 "is_configured": true, 00:27:46.847 "data_offset": 0, 00:27:46.847 "data_size": 65536 00:27:46.847 }, 00:27:46.847 { 00:27:46.847 "name": "BaseBdev2", 00:27:46.847 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:46.847 "is_configured": true, 00:27:46.847 "data_offset": 0, 00:27:46.847 "data_size": 65536 00:27:46.847 }, 00:27:46.847 { 00:27:46.847 "name": "BaseBdev3", 00:27:46.847 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:46.847 "is_configured": true, 00:27:46.847 "data_offset": 0, 00:27:46.847 "data_size": 65536 00:27:46.847 }, 00:27:46.847 { 00:27:46.847 "name": "BaseBdev4", 00:27:46.847 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:46.847 "is_configured": true, 00:27:46.847 "data_offset": 0, 00:27:46.847 "data_size": 65536 00:27:46.847 } 00:27:46.847 ] 00:27:46.847 }' 00:27:46.848 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.848 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 
-- # verify_raid_bdev_properties Existed_Raid 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.105 [2024-11-05 15:57:19.492606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.105 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.105 "name": "Existed_Raid", 00:27:47.105 "aliases": [ 00:27:47.105 "97409867-bfba-4257-abcd-3c20f79c99f6" 00:27:47.105 ], 00:27:47.105 "product_name": "Raid Volume", 00:27:47.105 "block_size": 512, 00:27:47.105 "num_blocks": 262144, 00:27:47.105 "uuid": "97409867-bfba-4257-abcd-3c20f79c99f6", 00:27:47.105 "assigned_rate_limits": { 00:27:47.105 "rw_ios_per_sec": 0, 00:27:47.105 "rw_mbytes_per_sec": 0, 00:27:47.106 "r_mbytes_per_sec": 0, 00:27:47.106 "w_mbytes_per_sec": 0 00:27:47.106 }, 00:27:47.106 "claimed": false, 00:27:47.106 "zoned": false, 00:27:47.106 "supported_io_types": { 00:27:47.106 "read": true, 
00:27:47.106 "write": true, 00:27:47.106 "unmap": true, 00:27:47.106 "flush": true, 00:27:47.106 "reset": true, 00:27:47.106 "nvme_admin": false, 00:27:47.106 "nvme_io": false, 00:27:47.106 "nvme_io_md": false, 00:27:47.106 "write_zeroes": true, 00:27:47.106 "zcopy": false, 00:27:47.106 "get_zone_info": false, 00:27:47.106 "zone_management": false, 00:27:47.106 "zone_append": false, 00:27:47.106 "compare": false, 00:27:47.106 "compare_and_write": false, 00:27:47.106 "abort": false, 00:27:47.106 "seek_hole": false, 00:27:47.106 "seek_data": false, 00:27:47.106 "copy": false, 00:27:47.106 "nvme_iov_md": false 00:27:47.106 }, 00:27:47.106 "memory_domains": [ 00:27:47.106 { 00:27:47.106 "dma_device_id": "system", 00:27:47.106 "dma_device_type": 1 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.106 "dma_device_type": 2 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "dma_device_id": "system", 00:27:47.106 "dma_device_type": 1 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.106 "dma_device_type": 2 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "dma_device_id": "system", 00:27:47.106 "dma_device_type": 1 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.106 "dma_device_type": 2 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "dma_device_id": "system", 00:27:47.106 "dma_device_type": 1 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.106 "dma_device_type": 2 00:27:47.106 } 00:27:47.106 ], 00:27:47.106 "driver_specific": { 00:27:47.106 "raid": { 00:27:47.106 "uuid": "97409867-bfba-4257-abcd-3c20f79c99f6", 00:27:47.106 "strip_size_kb": 64, 00:27:47.106 "state": "online", 00:27:47.106 "raid_level": "concat", 00:27:47.106 "superblock": false, 00:27:47.106 "num_base_bdevs": 4, 00:27:47.106 "num_base_bdevs_discovered": 4, 00:27:47.106 "num_base_bdevs_operational": 4, 00:27:47.106 "base_bdevs_list": [ 
00:27:47.106 { 00:27:47.106 "name": "NewBaseBdev", 00:27:47.106 "uuid": "a49d5026-08ad-4516-83d6-95eea58386ab", 00:27:47.106 "is_configured": true, 00:27:47.106 "data_offset": 0, 00:27:47.106 "data_size": 65536 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "name": "BaseBdev2", 00:27:47.106 "uuid": "bd0942a2-4b3b-4a81-a58f-79e20ba092e4", 00:27:47.106 "is_configured": true, 00:27:47.106 "data_offset": 0, 00:27:47.106 "data_size": 65536 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "name": "BaseBdev3", 00:27:47.106 "uuid": "6cd43be8-fc59-4b93-b69c-02be39645df8", 00:27:47.106 "is_configured": true, 00:27:47.106 "data_offset": 0, 00:27:47.106 "data_size": 65536 00:27:47.106 }, 00:27:47.106 { 00:27:47.106 "name": "BaseBdev4", 00:27:47.106 "uuid": "83e409bc-48f8-47a2-8da0-34c21a66e8de", 00:27:47.106 "is_configured": true, 00:27:47.106 "data_offset": 0, 00:27:47.106 "data_size": 65536 00:27:47.106 } 00:27:47.106 ] 00:27:47.106 } 00:27:47.106 } 00:27:47.106 }' 00:27:47.106 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:47.365 BaseBdev2 00:27:47.365 BaseBdev3 00:27:47.365 BaseBdev4' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.365 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.365 [2024-11-05 15:57:19.716342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:47.365 [2024-11-05 15:57:19.716368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:47.365 [2024-11-05 
15:57:19.716421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:47.366 [2024-11-05 15:57:19.716473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:47.366 [2024-11-05 15:57:19.716481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69249 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 69249 ']' 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69249 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69249 00:27:47.366 killing process with pid 69249 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69249' 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69249 00:27:47.366 [2024-11-05 15:57:19.744567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:47.366 15:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69249 00:27:47.624 [2024-11-05 15:57:19.935412] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:27:48.191 ************************************ 00:27:48.191 END TEST raid_state_function_test 00:27:48.191 ************************************ 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:48.191 00:27:48.191 real 0m7.796s 00:27:48.191 user 0m12.557s 00:27:48.191 sys 0m1.299s 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.191 15:57:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:27:48.191 15:57:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:48.191 15:57:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:48.191 15:57:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:48.191 ************************************ 00:27:48.191 START TEST raid_state_function_test_sb 00:27:48.191 ************************************ 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 
-- # '[' concat '!=' raid1 ']' 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69887 00:27:48.191 Process raid pid: 69887 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69887' 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69887 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 69887 ']' 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:48.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:48.191 15:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:48.191 [2024-11-05 15:57:20.598465] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:27:48.191 [2024-11-05 15:57:20.598574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.449 [2024-11-05 15:57:20.753429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.449 [2024-11-05 15:57:20.835206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.708 [2024-11-05 15:57:20.942753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.708 [2024-11-05 15:57:20.942783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.275 [2024-11-05 15:57:21.454722] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:27:49.275 [2024-11-05 15:57:21.454766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:49.275 [2024-11-05 15:57:21.454774] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.275 [2024-11-05 15:57:21.454782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.275 [2024-11-05 15:57:21.454787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:49.275 [2024-11-05 15:57:21.454794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:49.275 [2024-11-05 15:57:21.454799] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:49.275 [2024-11-05 15:57:21.454806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.275 "name": "Existed_Raid", 00:27:49.275 "uuid": "5eb0084c-128d-4162-9a3d-3f66839481d2", 00:27:49.275 "strip_size_kb": 64, 00:27:49.275 "state": "configuring", 00:27:49.275 "raid_level": "concat", 00:27:49.275 "superblock": true, 00:27:49.275 "num_base_bdevs": 4, 00:27:49.275 "num_base_bdevs_discovered": 0, 00:27:49.275 "num_base_bdevs_operational": 4, 00:27:49.275 "base_bdevs_list": [ 00:27:49.275 { 00:27:49.275 "name": "BaseBdev1", 00:27:49.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.275 "is_configured": false, 00:27:49.275 "data_offset": 0, 00:27:49.275 "data_size": 0 00:27:49.275 }, 00:27:49.275 { 00:27:49.275 "name": "BaseBdev2", 00:27:49.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.275 "is_configured": false, 00:27:49.275 "data_offset": 0, 00:27:49.275 "data_size": 0 00:27:49.275 }, 00:27:49.275 { 00:27:49.275 "name": "BaseBdev3", 00:27:49.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.275 "is_configured": false, 00:27:49.275 "data_offset": 0, 00:27:49.275 "data_size": 0 00:27:49.275 }, 
00:27:49.275 { 00:27:49.275 "name": "BaseBdev4", 00:27:49.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.275 "is_configured": false, 00:27:49.275 "data_offset": 0, 00:27:49.275 "data_size": 0 00:27:49.275 } 00:27:49.275 ] 00:27:49.275 }' 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.275 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 [2024-11-05 15:57:21.774731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:49.534 [2024-11-05 15:57:21.774765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 [2024-11-05 15:57:21.782740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:49.534 [2024-11-05 15:57:21.782770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:49.534 [2024-11-05 15:57:21.782777] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:27:49.534 [2024-11-05 15:57:21.782784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.534 [2024-11-05 15:57:21.782789] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:49.534 [2024-11-05 15:57:21.782797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:49.534 [2024-11-05 15:57:21.782802] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:49.534 [2024-11-05 15:57:21.782808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 [2024-11-05 15:57:21.810603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:49.534 BaseBdev1 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:49.534 15:57:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 [ 00:27:49.534 { 00:27:49.534 "name": "BaseBdev1", 00:27:49.534 "aliases": [ 00:27:49.534 "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63" 00:27:49.534 ], 00:27:49.534 "product_name": "Malloc disk", 00:27:49.534 "block_size": 512, 00:27:49.534 "num_blocks": 65536, 00:27:49.534 "uuid": "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63", 00:27:49.534 "assigned_rate_limits": { 00:27:49.534 "rw_ios_per_sec": 0, 00:27:49.534 "rw_mbytes_per_sec": 0, 00:27:49.534 "r_mbytes_per_sec": 0, 00:27:49.534 "w_mbytes_per_sec": 0 00:27:49.534 }, 00:27:49.534 "claimed": true, 00:27:49.534 "claim_type": "exclusive_write", 00:27:49.534 "zoned": false, 00:27:49.534 "supported_io_types": { 00:27:49.534 "read": true, 00:27:49.534 "write": true, 00:27:49.534 "unmap": true, 00:27:49.534 "flush": true, 00:27:49.534 "reset": true, 00:27:49.534 "nvme_admin": false, 00:27:49.534 "nvme_io": false, 00:27:49.534 "nvme_io_md": false, 00:27:49.534 "write_zeroes": true, 00:27:49.534 "zcopy": true, 00:27:49.534 "get_zone_info": false, 00:27:49.534 "zone_management": false, 00:27:49.534 "zone_append": false, 
00:27:49.534 "compare": false, 00:27:49.534 "compare_and_write": false, 00:27:49.534 "abort": true, 00:27:49.534 "seek_hole": false, 00:27:49.534 "seek_data": false, 00:27:49.534 "copy": true, 00:27:49.534 "nvme_iov_md": false 00:27:49.534 }, 00:27:49.534 "memory_domains": [ 00:27:49.534 { 00:27:49.534 "dma_device_id": "system", 00:27:49.534 "dma_device_type": 1 00:27:49.534 }, 00:27:49.534 { 00:27:49.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.534 "dma_device_type": 2 00:27:49.534 } 00:27:49.534 ], 00:27:49.534 "driver_specific": {} 00:27:49.534 } 00:27:49.534 ] 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.534 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.535 "name": "Existed_Raid", 00:27:49.535 "uuid": "d80ddb58-2847-408a-bbc4-43ee56edf3da", 00:27:49.535 "strip_size_kb": 64, 00:27:49.535 "state": "configuring", 00:27:49.535 "raid_level": "concat", 00:27:49.535 "superblock": true, 00:27:49.535 "num_base_bdevs": 4, 00:27:49.535 "num_base_bdevs_discovered": 1, 00:27:49.535 "num_base_bdevs_operational": 4, 00:27:49.535 "base_bdevs_list": [ 00:27:49.535 { 00:27:49.535 "name": "BaseBdev1", 00:27:49.535 "uuid": "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63", 00:27:49.535 "is_configured": true, 00:27:49.535 "data_offset": 2048, 00:27:49.535 "data_size": 63488 00:27:49.535 }, 00:27:49.535 { 00:27:49.535 "name": "BaseBdev2", 00:27:49.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.535 "is_configured": false, 00:27:49.535 "data_offset": 0, 00:27:49.535 "data_size": 0 00:27:49.535 }, 00:27:49.535 { 00:27:49.535 "name": "BaseBdev3", 00:27:49.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.535 "is_configured": false, 00:27:49.535 "data_offset": 0, 00:27:49.535 "data_size": 0 00:27:49.535 }, 00:27:49.535 { 00:27:49.535 "name": "BaseBdev4", 00:27:49.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.535 "is_configured": false, 00:27:49.535 "data_offset": 0, 00:27:49.535 "data_size": 0 00:27:49.535 } 00:27:49.535 ] 
00:27:49.535 }' 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.535 15:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.793 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:49.793 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.793 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.793 [2024-11-05 15:57:22.194710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:49.793 [2024-11-05 15:57:22.194751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.794 [2024-11-05 15:57:22.202753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:49.794 [2024-11-05 15:57:22.204255] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.794 [2024-11-05 15:57:22.204288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.794 [2024-11-05 15:57:22.204296] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:49.794 [2024-11-05 15:57:22.204305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:27:49.794 [2024-11-05 15:57:22.204310] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:49.794 [2024-11-05 15:57:22.204316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.794 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.052 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.052 15:57:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.052 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.052 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.052 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.052 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.052 "name": "Existed_Raid", 00:27:50.052 "uuid": "76906b7f-1faa-406d-9128-e755c3ef42da", 00:27:50.052 "strip_size_kb": 64, 00:27:50.052 "state": "configuring", 00:27:50.052 "raid_level": "concat", 00:27:50.052 "superblock": true, 00:27:50.052 "num_base_bdevs": 4, 00:27:50.052 "num_base_bdevs_discovered": 1, 00:27:50.052 "num_base_bdevs_operational": 4, 00:27:50.052 "base_bdevs_list": [ 00:27:50.052 { 00:27:50.052 "name": "BaseBdev1", 00:27:50.052 "uuid": "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63", 00:27:50.052 "is_configured": true, 00:27:50.052 "data_offset": 2048, 00:27:50.052 "data_size": 63488 00:27:50.052 }, 00:27:50.052 { 00:27:50.052 "name": "BaseBdev2", 00:27:50.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.052 "is_configured": false, 00:27:50.052 "data_offset": 0, 00:27:50.052 "data_size": 0 00:27:50.052 }, 00:27:50.052 { 00:27:50.052 "name": "BaseBdev3", 00:27:50.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.052 "is_configured": false, 00:27:50.052 "data_offset": 0, 00:27:50.052 "data_size": 0 00:27:50.052 }, 00:27:50.052 { 00:27:50.052 "name": "BaseBdev4", 00:27:50.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.052 "is_configured": false, 00:27:50.052 "data_offset": 0, 00:27:50.052 "data_size": 0 00:27:50.052 } 00:27:50.052 ] 00:27:50.052 }' 00:27:50.052 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.052 15:57:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.323 [2024-11-05 15:57:22.553075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:50.323 BaseBdev2 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:50.323 
15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.323 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.323 [ 00:27:50.323 { 00:27:50.323 "name": "BaseBdev2", 00:27:50.323 "aliases": [ 00:27:50.323 "7d8d7248-f8f0-4e16-bbab-7e2c227f181a" 00:27:50.323 ], 00:27:50.323 "product_name": "Malloc disk", 00:27:50.323 "block_size": 512, 00:27:50.323 "num_blocks": 65536, 00:27:50.323 "uuid": "7d8d7248-f8f0-4e16-bbab-7e2c227f181a", 00:27:50.323 "assigned_rate_limits": { 00:27:50.323 "rw_ios_per_sec": 0, 00:27:50.323 "rw_mbytes_per_sec": 0, 00:27:50.323 "r_mbytes_per_sec": 0, 00:27:50.323 "w_mbytes_per_sec": 0 00:27:50.323 }, 00:27:50.323 "claimed": true, 00:27:50.323 "claim_type": "exclusive_write", 00:27:50.323 "zoned": false, 00:27:50.323 "supported_io_types": { 00:27:50.323 "read": true, 00:27:50.323 "write": true, 00:27:50.323 "unmap": true, 00:27:50.323 "flush": true, 00:27:50.323 "reset": true, 00:27:50.323 "nvme_admin": false, 00:27:50.323 "nvme_io": false, 00:27:50.323 "nvme_io_md": false, 00:27:50.323 "write_zeroes": true, 00:27:50.323 "zcopy": true, 00:27:50.323 "get_zone_info": false, 00:27:50.323 "zone_management": false, 00:27:50.323 "zone_append": false, 00:27:50.323 "compare": false, 00:27:50.323 "compare_and_write": false, 00:27:50.323 "abort": true, 00:27:50.323 "seek_hole": false, 00:27:50.324 "seek_data": false, 00:27:50.324 "copy": true, 00:27:50.324 "nvme_iov_md": false 00:27:50.324 }, 00:27:50.324 "memory_domains": [ 00:27:50.324 { 00:27:50.324 "dma_device_id": "system", 00:27:50.324 "dma_device_type": 1 00:27:50.324 }, 00:27:50.324 { 00:27:50.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.324 "dma_device_type": 2 00:27:50.324 } 00:27:50.324 ], 00:27:50.324 "driver_specific": {} 00:27:50.324 } 00:27:50.324 ] 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.324 15:57:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.324 15:57:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.324 "name": "Existed_Raid", 00:27:50.324 "uuid": "76906b7f-1faa-406d-9128-e755c3ef42da", 00:27:50.324 "strip_size_kb": 64, 00:27:50.324 "state": "configuring", 00:27:50.324 "raid_level": "concat", 00:27:50.324 "superblock": true, 00:27:50.324 "num_base_bdevs": 4, 00:27:50.324 "num_base_bdevs_discovered": 2, 00:27:50.324 "num_base_bdevs_operational": 4, 00:27:50.324 "base_bdevs_list": [ 00:27:50.324 { 00:27:50.324 "name": "BaseBdev1", 00:27:50.324 "uuid": "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63", 00:27:50.324 "is_configured": true, 00:27:50.324 "data_offset": 2048, 00:27:50.324 "data_size": 63488 00:27:50.324 }, 00:27:50.324 { 00:27:50.324 "name": "BaseBdev2", 00:27:50.324 "uuid": "7d8d7248-f8f0-4e16-bbab-7e2c227f181a", 00:27:50.324 "is_configured": true, 00:27:50.324 "data_offset": 2048, 00:27:50.324 "data_size": 63488 00:27:50.324 }, 00:27:50.324 { 00:27:50.324 "name": "BaseBdev3", 00:27:50.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.324 "is_configured": false, 00:27:50.324 "data_offset": 0, 00:27:50.324 "data_size": 0 00:27:50.324 }, 00:27:50.324 { 00:27:50.324 "name": "BaseBdev4", 00:27:50.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.324 "is_configured": false, 00:27:50.324 "data_offset": 0, 00:27:50.324 "data_size": 0 00:27:50.324 } 00:27:50.324 ] 00:27:50.324 }' 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.324 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.582 
15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.582 [2024-11-05 15:57:22.935369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:50.582 BaseBdev3 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.582 [ 00:27:50.582 { 00:27:50.582 "name": "BaseBdev3", 00:27:50.582 "aliases": [ 00:27:50.582 "172bf6d0-0359-49f0-84ee-18c3907c9cc8" 
00:27:50.582 ], 00:27:50.582 "product_name": "Malloc disk", 00:27:50.582 "block_size": 512, 00:27:50.582 "num_blocks": 65536, 00:27:50.582 "uuid": "172bf6d0-0359-49f0-84ee-18c3907c9cc8", 00:27:50.582 "assigned_rate_limits": { 00:27:50.582 "rw_ios_per_sec": 0, 00:27:50.582 "rw_mbytes_per_sec": 0, 00:27:50.582 "r_mbytes_per_sec": 0, 00:27:50.582 "w_mbytes_per_sec": 0 00:27:50.582 }, 00:27:50.582 "claimed": true, 00:27:50.582 "claim_type": "exclusive_write", 00:27:50.582 "zoned": false, 00:27:50.582 "supported_io_types": { 00:27:50.582 "read": true, 00:27:50.582 "write": true, 00:27:50.582 "unmap": true, 00:27:50.582 "flush": true, 00:27:50.582 "reset": true, 00:27:50.582 "nvme_admin": false, 00:27:50.582 "nvme_io": false, 00:27:50.582 "nvme_io_md": false, 00:27:50.582 "write_zeroes": true, 00:27:50.582 "zcopy": true, 00:27:50.582 "get_zone_info": false, 00:27:50.582 "zone_management": false, 00:27:50.582 "zone_append": false, 00:27:50.582 "compare": false, 00:27:50.582 "compare_and_write": false, 00:27:50.582 "abort": true, 00:27:50.582 "seek_hole": false, 00:27:50.582 "seek_data": false, 00:27:50.582 "copy": true, 00:27:50.582 "nvme_iov_md": false 00:27:50.582 }, 00:27:50.582 "memory_domains": [ 00:27:50.582 { 00:27:50.582 "dma_device_id": "system", 00:27:50.582 "dma_device_type": 1 00:27:50.582 }, 00:27:50.582 { 00:27:50.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.582 "dma_device_type": 2 00:27:50.582 } 00:27:50.582 ], 00:27:50.582 "driver_specific": {} 00:27:50.582 } 00:27:50.582 ] 00:27:50.582 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:50.583 15:57:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.583 15:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.840 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.840 "name": "Existed_Raid", 00:27:50.840 "uuid": "76906b7f-1faa-406d-9128-e755c3ef42da", 00:27:50.840 "strip_size_kb": 64, 
00:27:50.840 "state": "configuring", 00:27:50.840 "raid_level": "concat", 00:27:50.840 "superblock": true, 00:27:50.840 "num_base_bdevs": 4, 00:27:50.840 "num_base_bdevs_discovered": 3, 00:27:50.840 "num_base_bdevs_operational": 4, 00:27:50.840 "base_bdevs_list": [ 00:27:50.840 { 00:27:50.840 "name": "BaseBdev1", 00:27:50.840 "uuid": "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63", 00:27:50.840 "is_configured": true, 00:27:50.840 "data_offset": 2048, 00:27:50.840 "data_size": 63488 00:27:50.840 }, 00:27:50.840 { 00:27:50.840 "name": "BaseBdev2", 00:27:50.840 "uuid": "7d8d7248-f8f0-4e16-bbab-7e2c227f181a", 00:27:50.840 "is_configured": true, 00:27:50.840 "data_offset": 2048, 00:27:50.840 "data_size": 63488 00:27:50.840 }, 00:27:50.840 { 00:27:50.840 "name": "BaseBdev3", 00:27:50.840 "uuid": "172bf6d0-0359-49f0-84ee-18c3907c9cc8", 00:27:50.840 "is_configured": true, 00:27:50.840 "data_offset": 2048, 00:27:50.840 "data_size": 63488 00:27:50.840 }, 00:27:50.840 { 00:27:50.840 "name": "BaseBdev4", 00:27:50.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.840 "is_configured": false, 00:27:50.840 "data_offset": 0, 00:27:50.840 "data_size": 0 00:27:50.840 } 00:27:50.840 ] 00:27:50.840 }' 00:27:50.840 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.840 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.098 [2024-11-05 15:57:23.297503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:51.098 [2024-11-05 15:57:23.297689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:27:51.098 [2024-11-05 15:57:23.297700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:51.098 [2024-11-05 15:57:23.297929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:51.098 [2024-11-05 15:57:23.298044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:51.098 [2024-11-05 15:57:23.298054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:51.098 BaseBdev4 00:27:51.098 [2024-11-05 15:57:23.298155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.098 15:57:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.098 [ 00:27:51.098 { 00:27:51.098 "name": "BaseBdev4", 00:27:51.098 "aliases": [ 00:27:51.098 "9aeb4e98-fe56-4654-8b80-8d07dc053d87" 00:27:51.098 ], 00:27:51.098 "product_name": "Malloc disk", 00:27:51.098 "block_size": 512, 00:27:51.098 "num_blocks": 65536, 00:27:51.098 "uuid": "9aeb4e98-fe56-4654-8b80-8d07dc053d87", 00:27:51.098 "assigned_rate_limits": { 00:27:51.098 "rw_ios_per_sec": 0, 00:27:51.098 "rw_mbytes_per_sec": 0, 00:27:51.098 "r_mbytes_per_sec": 0, 00:27:51.098 "w_mbytes_per_sec": 0 00:27:51.098 }, 00:27:51.098 "claimed": true, 00:27:51.098 "claim_type": "exclusive_write", 00:27:51.098 "zoned": false, 00:27:51.098 "supported_io_types": { 00:27:51.098 "read": true, 00:27:51.098 "write": true, 00:27:51.098 "unmap": true, 00:27:51.098 "flush": true, 00:27:51.098 "reset": true, 00:27:51.098 "nvme_admin": false, 00:27:51.098 "nvme_io": false, 00:27:51.098 "nvme_io_md": false, 00:27:51.098 "write_zeroes": true, 00:27:51.098 "zcopy": true, 00:27:51.098 "get_zone_info": false, 00:27:51.098 "zone_management": false, 00:27:51.098 "zone_append": false, 00:27:51.098 "compare": false, 00:27:51.098 "compare_and_write": false, 00:27:51.098 "abort": true, 00:27:51.098 "seek_hole": false, 00:27:51.098 "seek_data": false, 00:27:51.098 "copy": true, 00:27:51.098 "nvme_iov_md": false 00:27:51.098 }, 00:27:51.098 "memory_domains": [ 00:27:51.098 { 00:27:51.098 "dma_device_id": "system", 00:27:51.098 "dma_device_type": 1 00:27:51.098 }, 00:27:51.098 { 00:27:51.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.098 "dma_device_type": 2 00:27:51.098 } 00:27:51.098 ], 00:27:51.098 "driver_specific": {} 00:27:51.098 } 00:27:51.098 ] 
00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:51.098 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.099 
15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.099 "name": "Existed_Raid", 00:27:51.099 "uuid": "76906b7f-1faa-406d-9128-e755c3ef42da", 00:27:51.099 "strip_size_kb": 64, 00:27:51.099 "state": "online", 00:27:51.099 "raid_level": "concat", 00:27:51.099 "superblock": true, 00:27:51.099 "num_base_bdevs": 4, 00:27:51.099 "num_base_bdevs_discovered": 4, 00:27:51.099 "num_base_bdevs_operational": 4, 00:27:51.099 "base_bdevs_list": [ 00:27:51.099 { 00:27:51.099 "name": "BaseBdev1", 00:27:51.099 "uuid": "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63", 00:27:51.099 "is_configured": true, 00:27:51.099 "data_offset": 2048, 00:27:51.099 "data_size": 63488 00:27:51.099 }, 00:27:51.099 { 00:27:51.099 "name": "BaseBdev2", 00:27:51.099 "uuid": "7d8d7248-f8f0-4e16-bbab-7e2c227f181a", 00:27:51.099 "is_configured": true, 00:27:51.099 "data_offset": 2048, 00:27:51.099 "data_size": 63488 00:27:51.099 }, 00:27:51.099 { 00:27:51.099 "name": "BaseBdev3", 00:27:51.099 "uuid": "172bf6d0-0359-49f0-84ee-18c3907c9cc8", 00:27:51.099 "is_configured": true, 00:27:51.099 "data_offset": 2048, 00:27:51.099 "data_size": 63488 00:27:51.099 }, 00:27:51.099 { 00:27:51.099 "name": "BaseBdev4", 00:27:51.099 "uuid": "9aeb4e98-fe56-4654-8b80-8d07dc053d87", 00:27:51.099 "is_configured": true, 00:27:51.099 "data_offset": 2048, 00:27:51.099 "data_size": 63488 00:27:51.099 } 00:27:51.099 ] 00:27:51.099 }' 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.099 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:51.356 
15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:51.356 [2024-11-05 15:57:23.645928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.356 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:51.356 "name": "Existed_Raid", 00:27:51.356 "aliases": [ 00:27:51.356 "76906b7f-1faa-406d-9128-e755c3ef42da" 00:27:51.356 ], 00:27:51.356 "product_name": "Raid Volume", 00:27:51.356 "block_size": 512, 00:27:51.356 "num_blocks": 253952, 00:27:51.356 "uuid": "76906b7f-1faa-406d-9128-e755c3ef42da", 00:27:51.356 "assigned_rate_limits": { 00:27:51.356 "rw_ios_per_sec": 0, 00:27:51.356 "rw_mbytes_per_sec": 0, 00:27:51.356 "r_mbytes_per_sec": 0, 00:27:51.356 "w_mbytes_per_sec": 0 00:27:51.356 }, 00:27:51.356 "claimed": false, 00:27:51.356 "zoned": false, 00:27:51.356 "supported_io_types": { 00:27:51.356 "read": true, 00:27:51.356 "write": true, 
00:27:51.356 "unmap": true, 00:27:51.356 "flush": true, 00:27:51.356 "reset": true, 00:27:51.356 "nvme_admin": false, 00:27:51.356 "nvme_io": false, 00:27:51.356 "nvme_io_md": false, 00:27:51.356 "write_zeroes": true, 00:27:51.356 "zcopy": false, 00:27:51.356 "get_zone_info": false, 00:27:51.356 "zone_management": false, 00:27:51.356 "zone_append": false, 00:27:51.356 "compare": false, 00:27:51.356 "compare_and_write": false, 00:27:51.356 "abort": false, 00:27:51.356 "seek_hole": false, 00:27:51.356 "seek_data": false, 00:27:51.356 "copy": false, 00:27:51.356 "nvme_iov_md": false 00:27:51.356 }, 00:27:51.356 "memory_domains": [ 00:27:51.356 { 00:27:51.356 "dma_device_id": "system", 00:27:51.356 "dma_device_type": 1 00:27:51.356 }, 00:27:51.356 { 00:27:51.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.356 "dma_device_type": 2 00:27:51.356 }, 00:27:51.356 { 00:27:51.356 "dma_device_id": "system", 00:27:51.356 "dma_device_type": 1 00:27:51.356 }, 00:27:51.356 { 00:27:51.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.356 "dma_device_type": 2 00:27:51.356 }, 00:27:51.356 { 00:27:51.356 "dma_device_id": "system", 00:27:51.356 "dma_device_type": 1 00:27:51.356 }, 00:27:51.356 { 00:27:51.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.356 "dma_device_type": 2 00:27:51.356 }, 00:27:51.356 { 00:27:51.356 "dma_device_id": "system", 00:27:51.356 "dma_device_type": 1 00:27:51.356 }, 00:27:51.356 { 00:27:51.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.356 "dma_device_type": 2 00:27:51.356 } 00:27:51.356 ], 00:27:51.357 "driver_specific": { 00:27:51.357 "raid": { 00:27:51.357 "uuid": "76906b7f-1faa-406d-9128-e755c3ef42da", 00:27:51.357 "strip_size_kb": 64, 00:27:51.357 "state": "online", 00:27:51.357 "raid_level": "concat", 00:27:51.357 "superblock": true, 00:27:51.357 "num_base_bdevs": 4, 00:27:51.357 "num_base_bdevs_discovered": 4, 00:27:51.357 "num_base_bdevs_operational": 4, 00:27:51.357 "base_bdevs_list": [ 00:27:51.357 { 00:27:51.357 "name": 
"BaseBdev1", 00:27:51.357 "uuid": "ab96fa3f-67fb-42cf-abc9-c2c241f6fc63", 00:27:51.357 "is_configured": true, 00:27:51.357 "data_offset": 2048, 00:27:51.357 "data_size": 63488 00:27:51.357 }, 00:27:51.357 { 00:27:51.357 "name": "BaseBdev2", 00:27:51.357 "uuid": "7d8d7248-f8f0-4e16-bbab-7e2c227f181a", 00:27:51.357 "is_configured": true, 00:27:51.357 "data_offset": 2048, 00:27:51.357 "data_size": 63488 00:27:51.357 }, 00:27:51.357 { 00:27:51.357 "name": "BaseBdev3", 00:27:51.357 "uuid": "172bf6d0-0359-49f0-84ee-18c3907c9cc8", 00:27:51.357 "is_configured": true, 00:27:51.357 "data_offset": 2048, 00:27:51.357 "data_size": 63488 00:27:51.357 }, 00:27:51.357 { 00:27:51.357 "name": "BaseBdev4", 00:27:51.357 "uuid": "9aeb4e98-fe56-4654-8b80-8d07dc053d87", 00:27:51.357 "is_configured": true, 00:27:51.357 "data_offset": 2048, 00:27:51.357 "data_size": 63488 00:27:51.357 } 00:27:51.357 ] 00:27:51.357 } 00:27:51.357 } 00:27:51.357 }' 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:51.357 BaseBdev2 00:27:51.357 BaseBdev3 00:27:51.357 BaseBdev4' 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.357 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.614 15:57:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.614 [2024-11-05 15:57:23.861699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:51.614 [2024-11-05 15:57:23.861729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:27:51.614 [2024-11-05 15:57:23.861768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.614 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.615 15:57:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.615 "name": "Existed_Raid", 00:27:51.615 "uuid": "76906b7f-1faa-406d-9128-e755c3ef42da", 00:27:51.615 "strip_size_kb": 64, 00:27:51.615 "state": "offline", 00:27:51.615 "raid_level": "concat", 00:27:51.615 "superblock": true, 00:27:51.615 "num_base_bdevs": 4, 00:27:51.615 "num_base_bdevs_discovered": 3, 00:27:51.615 "num_base_bdevs_operational": 3, 00:27:51.615 "base_bdevs_list": [ 00:27:51.615 { 00:27:51.615 "name": null, 00:27:51.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.615 "is_configured": false, 00:27:51.615 "data_offset": 0, 00:27:51.615 "data_size": 63488 00:27:51.615 }, 00:27:51.615 { 00:27:51.615 "name": "BaseBdev2", 00:27:51.615 "uuid": "7d8d7248-f8f0-4e16-bbab-7e2c227f181a", 00:27:51.615 "is_configured": true, 00:27:51.615 "data_offset": 2048, 00:27:51.615 "data_size": 63488 00:27:51.615 }, 00:27:51.615 { 00:27:51.615 "name": "BaseBdev3", 00:27:51.615 "uuid": "172bf6d0-0359-49f0-84ee-18c3907c9cc8", 00:27:51.615 "is_configured": true, 00:27:51.615 "data_offset": 2048, 00:27:51.615 "data_size": 63488 00:27:51.615 }, 00:27:51.615 { 00:27:51.615 "name": "BaseBdev4", 00:27:51.615 "uuid": "9aeb4e98-fe56-4654-8b80-8d07dc053d87", 00:27:51.615 "is_configured": true, 00:27:51.615 "data_offset": 2048, 00:27:51.615 "data_size": 63488 00:27:51.615 } 00:27:51.615 ] 00:27:51.615 }' 
00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.615 15:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.873 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.873 [2024-11-05 15:57:24.247711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.131 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.131 [2024-11-05 15:57:24.334692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.132 15:57:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.132 [2024-11-05 15:57:24.416571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:52.132 [2024-11-05 15:57:24.416685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.132 BaseBdev2 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.132 
15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.132 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.391 [ 00:27:52.391 { 00:27:52.391 "name": "BaseBdev2", 00:27:52.391 "aliases": [ 00:27:52.391 "fc53a972-c93a-4532-84c6-f38a252a5755" 00:27:52.391 ], 00:27:52.391 "product_name": "Malloc disk", 00:27:52.391 "block_size": 512, 00:27:52.391 "num_blocks": 65536, 00:27:52.391 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:52.391 "assigned_rate_limits": { 00:27:52.391 "rw_ios_per_sec": 0, 00:27:52.391 "rw_mbytes_per_sec": 0, 00:27:52.391 "r_mbytes_per_sec": 0, 00:27:52.391 "w_mbytes_per_sec": 0 00:27:52.391 }, 00:27:52.391 "claimed": false, 00:27:52.391 "zoned": false, 00:27:52.391 "supported_io_types": { 00:27:52.391 "read": true, 00:27:52.391 "write": true, 00:27:52.391 "unmap": true, 00:27:52.391 "flush": true, 00:27:52.391 "reset": true, 00:27:52.391 "nvme_admin": false, 00:27:52.391 "nvme_io": false, 00:27:52.391 "nvme_io_md": false, 00:27:52.391 "write_zeroes": true, 00:27:52.391 "zcopy": true, 00:27:52.391 "get_zone_info": false, 00:27:52.391 "zone_management": false, 00:27:52.391 "zone_append": false, 00:27:52.391 "compare": false, 00:27:52.391 "compare_and_write": false, 00:27:52.391 "abort": true, 00:27:52.391 "seek_hole": false, 00:27:52.391 "seek_data": false, 00:27:52.391 "copy": true, 00:27:52.391 "nvme_iov_md": false 00:27:52.391 }, 00:27:52.391 "memory_domains": [ 00:27:52.391 { 00:27:52.391 "dma_device_id": "system", 00:27:52.391 "dma_device_type": 1 00:27:52.391 }, 00:27:52.391 { 00:27:52.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.391 "dma_device_type": 2 00:27:52.391 } 
00:27:52.391 ], 00:27:52.391 "driver_specific": {} 00:27:52.391 } 00:27:52.391 ] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.391 BaseBdev3 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.391 [ 00:27:52.391 { 00:27:52.391 "name": "BaseBdev3", 00:27:52.391 "aliases": [ 00:27:52.391 "7d227993-cffc-4ba0-9610-52bfb7500eb3" 00:27:52.391 ], 00:27:52.391 "product_name": "Malloc disk", 00:27:52.391 "block_size": 512, 00:27:52.391 "num_blocks": 65536, 00:27:52.391 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:52.391 "assigned_rate_limits": { 00:27:52.391 "rw_ios_per_sec": 0, 00:27:52.391 "rw_mbytes_per_sec": 0, 00:27:52.391 "r_mbytes_per_sec": 0, 00:27:52.391 "w_mbytes_per_sec": 0 00:27:52.391 }, 00:27:52.391 "claimed": false, 00:27:52.391 "zoned": false, 00:27:52.391 "supported_io_types": { 00:27:52.391 "read": true, 00:27:52.391 "write": true, 00:27:52.391 "unmap": true, 00:27:52.391 "flush": true, 00:27:52.391 "reset": true, 00:27:52.391 "nvme_admin": false, 00:27:52.391 "nvme_io": false, 00:27:52.391 "nvme_io_md": false, 00:27:52.391 "write_zeroes": true, 00:27:52.391 "zcopy": true, 00:27:52.391 "get_zone_info": false, 00:27:52.391 "zone_management": false, 00:27:52.391 "zone_append": false, 00:27:52.391 "compare": false, 00:27:52.391 "compare_and_write": false, 00:27:52.391 "abort": true, 00:27:52.391 "seek_hole": false, 00:27:52.391 "seek_data": false, 00:27:52.391 "copy": true, 00:27:52.391 "nvme_iov_md": false 00:27:52.391 }, 00:27:52.391 "memory_domains": [ 00:27:52.391 { 00:27:52.391 "dma_device_id": "system", 00:27:52.391 "dma_device_type": 1 00:27:52.391 }, 00:27:52.391 { 00:27:52.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:27:52.391 "dma_device_type": 2 00:27:52.391 } 00:27:52.391 ], 00:27:52.391 "driver_specific": {} 00:27:52.391 } 00:27:52.391 ] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.391 BaseBdev4 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.391 15:57:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.391 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.392 [ 00:27:52.392 { 00:27:52.392 "name": "BaseBdev4", 00:27:52.392 "aliases": [ 00:27:52.392 "5e4abc7a-6ec9-4512-ad41-d1b2695da36f" 00:27:52.392 ], 00:27:52.392 "product_name": "Malloc disk", 00:27:52.392 "block_size": 512, 00:27:52.392 "num_blocks": 65536, 00:27:52.392 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:52.392 "assigned_rate_limits": { 00:27:52.392 "rw_ios_per_sec": 0, 00:27:52.392 "rw_mbytes_per_sec": 0, 00:27:52.392 "r_mbytes_per_sec": 0, 00:27:52.392 "w_mbytes_per_sec": 0 00:27:52.392 }, 00:27:52.392 "claimed": false, 00:27:52.392 "zoned": false, 00:27:52.392 "supported_io_types": { 00:27:52.392 "read": true, 00:27:52.392 "write": true, 00:27:52.392 "unmap": true, 00:27:52.392 "flush": true, 00:27:52.392 "reset": true, 00:27:52.392 "nvme_admin": false, 00:27:52.392 "nvme_io": false, 00:27:52.392 "nvme_io_md": false, 00:27:52.392 "write_zeroes": true, 00:27:52.392 "zcopy": true, 00:27:52.392 "get_zone_info": false, 00:27:52.392 "zone_management": false, 00:27:52.392 "zone_append": false, 00:27:52.392 "compare": false, 00:27:52.392 "compare_and_write": false, 00:27:52.392 "abort": true, 00:27:52.392 "seek_hole": false, 00:27:52.392 "seek_data": false, 00:27:52.392 "copy": true, 00:27:52.392 "nvme_iov_md": false 00:27:52.392 }, 00:27:52.392 "memory_domains": [ 00:27:52.392 { 00:27:52.392 "dma_device_id": "system", 00:27:52.392 "dma_device_type": 1 00:27:52.392 }, 00:27:52.392 { 00:27:52.392 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.392 "dma_device_type": 2 00:27:52.392 } 00:27:52.392 ], 00:27:52.392 "driver_specific": {} 00:27:52.392 } 00:27:52.392 ] 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.392 [2024-11-05 15:57:24.650499] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:52.392 [2024-11-05 15:57:24.650619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:52.392 [2024-11-05 15:57:24.650643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:52.392 [2024-11-05 15:57:24.652137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:52.392 [2024-11-05 15:57:24.652177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:52.392 "name": "Existed_Raid", 00:27:52.392 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:52.392 "strip_size_kb": 64, 00:27:52.392 "state": "configuring", 00:27:52.392 "raid_level": "concat", 00:27:52.392 "superblock": true, 00:27:52.392 "num_base_bdevs": 4, 00:27:52.392 "num_base_bdevs_discovered": 3, 
00:27:52.392 "num_base_bdevs_operational": 4, 00:27:52.392 "base_bdevs_list": [ 00:27:52.392 { 00:27:52.392 "name": "BaseBdev1", 00:27:52.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.392 "is_configured": false, 00:27:52.392 "data_offset": 0, 00:27:52.392 "data_size": 0 00:27:52.392 }, 00:27:52.392 { 00:27:52.392 "name": "BaseBdev2", 00:27:52.392 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:52.392 "is_configured": true, 00:27:52.392 "data_offset": 2048, 00:27:52.392 "data_size": 63488 00:27:52.392 }, 00:27:52.392 { 00:27:52.392 "name": "BaseBdev3", 00:27:52.392 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:52.392 "is_configured": true, 00:27:52.392 "data_offset": 2048, 00:27:52.392 "data_size": 63488 00:27:52.392 }, 00:27:52.392 { 00:27:52.392 "name": "BaseBdev4", 00:27:52.392 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:52.392 "is_configured": true, 00:27:52.392 "data_offset": 2048, 00:27:52.392 "data_size": 63488 00:27:52.392 } 00:27:52.392 ] 00:27:52.392 }' 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:52.392 15:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.650 [2024-11-05 15:57:25.010579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:52.650 15:57:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.650 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:52.650 "name": "Existed_Raid", 00:27:52.650 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:52.650 "strip_size_kb": 64, 00:27:52.651 "state": "configuring", 00:27:52.651 "raid_level": "concat", 00:27:52.651 "superblock": true, 00:27:52.651 "num_base_bdevs": 4, 00:27:52.651 
"num_base_bdevs_discovered": 2, 00:27:52.651 "num_base_bdevs_operational": 4, 00:27:52.651 "base_bdevs_list": [ 00:27:52.651 { 00:27:52.651 "name": "BaseBdev1", 00:27:52.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.651 "is_configured": false, 00:27:52.651 "data_offset": 0, 00:27:52.651 "data_size": 0 00:27:52.651 }, 00:27:52.651 { 00:27:52.651 "name": null, 00:27:52.651 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:52.651 "is_configured": false, 00:27:52.651 "data_offset": 0, 00:27:52.651 "data_size": 63488 00:27:52.651 }, 00:27:52.651 { 00:27:52.651 "name": "BaseBdev3", 00:27:52.651 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:52.651 "is_configured": true, 00:27:52.651 "data_offset": 2048, 00:27:52.651 "data_size": 63488 00:27:52.651 }, 00:27:52.651 { 00:27:52.651 "name": "BaseBdev4", 00:27:52.651 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:52.651 "is_configured": true, 00:27:52.651 "data_offset": 2048, 00:27:52.651 "data_size": 63488 00:27:52.651 } 00:27:52.651 ] 00:27:52.651 }' 00:27:52.651 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:52.651 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.908 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.908 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:52.908 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.908 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:53.166 15:57:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.166 [2024-11-05 15:57:25.368869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:53.166 BaseBdev1 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.166 [ 00:27:53.166 { 00:27:53.166 "name": "BaseBdev1", 00:27:53.166 "aliases": [ 00:27:53.166 "fa372679-6722-4d26-9d70-8c915d979b2b" 00:27:53.166 ], 00:27:53.166 "product_name": "Malloc disk", 00:27:53.166 "block_size": 512, 00:27:53.166 "num_blocks": 65536, 00:27:53.166 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:53.166 "assigned_rate_limits": { 00:27:53.166 "rw_ios_per_sec": 0, 00:27:53.166 "rw_mbytes_per_sec": 0, 00:27:53.166 "r_mbytes_per_sec": 0, 00:27:53.166 "w_mbytes_per_sec": 0 00:27:53.166 }, 00:27:53.166 "claimed": true, 00:27:53.166 "claim_type": "exclusive_write", 00:27:53.166 "zoned": false, 00:27:53.166 "supported_io_types": { 00:27:53.166 "read": true, 00:27:53.166 "write": true, 00:27:53.166 "unmap": true, 00:27:53.166 "flush": true, 00:27:53.166 "reset": true, 00:27:53.166 "nvme_admin": false, 00:27:53.166 "nvme_io": false, 00:27:53.166 "nvme_io_md": false, 00:27:53.166 "write_zeroes": true, 00:27:53.166 "zcopy": true, 00:27:53.166 "get_zone_info": false, 00:27:53.166 "zone_management": false, 00:27:53.166 "zone_append": false, 00:27:53.166 "compare": false, 00:27:53.166 "compare_and_write": false, 00:27:53.166 "abort": true, 00:27:53.166 "seek_hole": false, 00:27:53.166 "seek_data": false, 00:27:53.166 "copy": true, 00:27:53.166 "nvme_iov_md": false 00:27:53.166 }, 00:27:53.166 "memory_domains": [ 00:27:53.166 { 00:27:53.166 "dma_device_id": "system", 00:27:53.166 "dma_device_type": 1 00:27:53.166 }, 00:27:53.166 { 00:27:53.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.166 "dma_device_type": 2 00:27:53.166 } 00:27:53.166 ], 00:27:53.166 "driver_specific": {} 00:27:53.166 } 00:27:53.166 ] 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:53.166 
15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.166 "name": "Existed_Raid", 00:27:53.166 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:53.166 "strip_size_kb": 
64, 00:27:53.166 "state": "configuring", 00:27:53.166 "raid_level": "concat", 00:27:53.166 "superblock": true, 00:27:53.166 "num_base_bdevs": 4, 00:27:53.166 "num_base_bdevs_discovered": 3, 00:27:53.166 "num_base_bdevs_operational": 4, 00:27:53.166 "base_bdevs_list": [ 00:27:53.166 { 00:27:53.166 "name": "BaseBdev1", 00:27:53.166 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:53.166 "is_configured": true, 00:27:53.166 "data_offset": 2048, 00:27:53.166 "data_size": 63488 00:27:53.166 }, 00:27:53.166 { 00:27:53.166 "name": null, 00:27:53.166 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:53.166 "is_configured": false, 00:27:53.166 "data_offset": 0, 00:27:53.166 "data_size": 63488 00:27:53.166 }, 00:27:53.166 { 00:27:53.166 "name": "BaseBdev3", 00:27:53.166 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:53.166 "is_configured": true, 00:27:53.166 "data_offset": 2048, 00:27:53.166 "data_size": 63488 00:27:53.166 }, 00:27:53.166 { 00:27:53.166 "name": "BaseBdev4", 00:27:53.166 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:53.166 "is_configured": true, 00:27:53.166 "data_offset": 2048, 00:27:53.166 "data_size": 63488 00:27:53.166 } 00:27:53.166 ] 00:27:53.166 }' 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.166 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 [2024-11-05 15:57:25.736993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.424 "name": "Existed_Raid", 00:27:53.424 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:53.424 "strip_size_kb": 64, 00:27:53.424 "state": "configuring", 00:27:53.424 "raid_level": "concat", 00:27:53.424 "superblock": true, 00:27:53.424 "num_base_bdevs": 4, 00:27:53.424 "num_base_bdevs_discovered": 2, 00:27:53.424 "num_base_bdevs_operational": 4, 00:27:53.424 "base_bdevs_list": [ 00:27:53.424 { 00:27:53.424 "name": "BaseBdev1", 00:27:53.424 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:53.424 "is_configured": true, 00:27:53.424 "data_offset": 2048, 00:27:53.424 "data_size": 63488 00:27:53.424 }, 00:27:53.424 { 00:27:53.424 "name": null, 00:27:53.424 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:53.424 "is_configured": false, 00:27:53.424 "data_offset": 0, 00:27:53.424 "data_size": 63488 00:27:53.424 }, 00:27:53.424 { 00:27:53.424 "name": null, 00:27:53.424 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:53.424 "is_configured": false, 00:27:53.424 "data_offset": 0, 00:27:53.424 "data_size": 63488 00:27:53.424 }, 00:27:53.424 { 00:27:53.424 "name": "BaseBdev4", 00:27:53.424 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:53.424 "is_configured": true, 00:27:53.424 "data_offset": 2048, 00:27:53.424 "data_size": 63488 00:27:53.424 } 00:27:53.424 ] 00:27:53.424 }' 00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:27:53.424 15:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.682 [2024-11-05 15:57:26.085043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=64 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.682 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.939 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.939 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.939 "name": "Existed_Raid", 00:27:53.939 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:53.939 "strip_size_kb": 64, 00:27:53.939 "state": "configuring", 00:27:53.939 "raid_level": "concat", 00:27:53.939 "superblock": true, 00:27:53.939 "num_base_bdevs": 4, 00:27:53.939 "num_base_bdevs_discovered": 3, 00:27:53.939 "num_base_bdevs_operational": 4, 00:27:53.939 "base_bdevs_list": [ 00:27:53.939 { 00:27:53.939 "name": "BaseBdev1", 00:27:53.939 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:53.939 "is_configured": true, 00:27:53.939 "data_offset": 2048, 00:27:53.940 "data_size": 63488 00:27:53.940 }, 00:27:53.940 { 00:27:53.940 "name": null, 00:27:53.940 "uuid": 
"fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:53.940 "is_configured": false, 00:27:53.940 "data_offset": 0, 00:27:53.940 "data_size": 63488 00:27:53.940 }, 00:27:53.940 { 00:27:53.940 "name": "BaseBdev3", 00:27:53.940 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:53.940 "is_configured": true, 00:27:53.940 "data_offset": 2048, 00:27:53.940 "data_size": 63488 00:27:53.940 }, 00:27:53.940 { 00:27:53.940 "name": "BaseBdev4", 00:27:53.940 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:53.940 "is_configured": true, 00:27:53.940 "data_offset": 2048, 00:27:53.940 "data_size": 63488 00:27:53.940 } 00:27:53.940 ] 00:27:53.940 }' 00:27:53.940 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.940 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.197 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.198 [2024-11-05 15:57:26.437122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:27:54.198 "name": "Existed_Raid", 00:27:54.198 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:54.198 "strip_size_kb": 64, 00:27:54.198 "state": "configuring", 00:27:54.198 "raid_level": "concat", 00:27:54.198 "superblock": true, 00:27:54.198 "num_base_bdevs": 4, 00:27:54.198 "num_base_bdevs_discovered": 2, 00:27:54.198 "num_base_bdevs_operational": 4, 00:27:54.198 "base_bdevs_list": [ 00:27:54.198 { 00:27:54.198 "name": null, 00:27:54.198 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:54.198 "is_configured": false, 00:27:54.198 "data_offset": 0, 00:27:54.198 "data_size": 63488 00:27:54.198 }, 00:27:54.198 { 00:27:54.198 "name": null, 00:27:54.198 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:54.198 "is_configured": false, 00:27:54.198 "data_offset": 0, 00:27:54.198 "data_size": 63488 00:27:54.198 }, 00:27:54.198 { 00:27:54.198 "name": "BaseBdev3", 00:27:54.198 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:54.198 "is_configured": true, 00:27:54.198 "data_offset": 2048, 00:27:54.198 "data_size": 63488 00:27:54.198 }, 00:27:54.198 { 00:27:54.198 "name": "BaseBdev4", 00:27:54.198 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:54.198 "is_configured": true, 00:27:54.198 "data_offset": 2048, 00:27:54.198 "data_size": 63488 00:27:54.198 } 00:27:54.198 ] 00:27:54.198 }' 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.198 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.455 [2024-11-05 15:57:26.811679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.455 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.456 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.456 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.456 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.456 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.456 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.456 "name": "Existed_Raid", 00:27:54.456 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:54.456 "strip_size_kb": 64, 00:27:54.456 "state": "configuring", 00:27:54.456 "raid_level": "concat", 00:27:54.456 "superblock": true, 00:27:54.456 "num_base_bdevs": 4, 00:27:54.456 "num_base_bdevs_discovered": 3, 00:27:54.456 "num_base_bdevs_operational": 4, 00:27:54.456 "base_bdevs_list": [ 00:27:54.456 { 00:27:54.456 "name": null, 00:27:54.456 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:54.456 "is_configured": false, 00:27:54.456 "data_offset": 0, 00:27:54.456 "data_size": 63488 00:27:54.456 }, 00:27:54.456 { 00:27:54.456 "name": "BaseBdev2", 00:27:54.456 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:54.456 "is_configured": true, 00:27:54.456 "data_offset": 2048, 00:27:54.456 "data_size": 63488 00:27:54.456 }, 00:27:54.456 { 00:27:54.456 "name": "BaseBdev3", 00:27:54.456 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:54.456 "is_configured": true, 00:27:54.456 "data_offset": 2048, 00:27:54.456 "data_size": 63488 00:27:54.456 }, 00:27:54.456 { 00:27:54.456 "name": "BaseBdev4", 00:27:54.456 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:54.456 "is_configured": true, 00:27:54.456 "data_offset": 2048, 00:27:54.456 
"data_size": 63488 00:27:54.456 } 00:27:54.456 ] 00:27:54.456 }' 00:27:54.456 15:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.456 15:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.712 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.712 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:54.712 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.712 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.712 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fa372679-6722-4d26-9d70-8c915d979b2b 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.969 [2024-11-05 15:57:27.193859] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:54.969 [2024-11-05 15:57:27.194024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:54.969 [2024-11-05 15:57:27.194033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:54.969 [2024-11-05 15:57:27.194232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:54.969 [2024-11-05 15:57:27.194330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:54.969 [2024-11-05 15:57:27.194342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:54.969 [2024-11-05 15:57:27.194442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.969 NewBaseBdev 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.969 [ 00:27:54.969 { 00:27:54.969 "name": "NewBaseBdev", 00:27:54.969 "aliases": [ 00:27:54.969 "fa372679-6722-4d26-9d70-8c915d979b2b" 00:27:54.969 ], 00:27:54.969 "product_name": "Malloc disk", 00:27:54.969 "block_size": 512, 00:27:54.969 "num_blocks": 65536, 00:27:54.969 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:54.969 "assigned_rate_limits": { 00:27:54.969 "rw_ios_per_sec": 0, 00:27:54.969 "rw_mbytes_per_sec": 0, 00:27:54.969 "r_mbytes_per_sec": 0, 00:27:54.969 "w_mbytes_per_sec": 0 00:27:54.969 }, 00:27:54.969 "claimed": true, 00:27:54.969 "claim_type": "exclusive_write", 00:27:54.969 "zoned": false, 00:27:54.969 "supported_io_types": { 00:27:54.969 "read": true, 00:27:54.969 "write": true, 00:27:54.969 "unmap": true, 00:27:54.969 "flush": true, 00:27:54.969 "reset": true, 00:27:54.969 "nvme_admin": false, 00:27:54.969 "nvme_io": false, 00:27:54.969 "nvme_io_md": false, 00:27:54.969 "write_zeroes": true, 00:27:54.969 "zcopy": true, 00:27:54.969 "get_zone_info": false, 00:27:54.969 "zone_management": false, 00:27:54.969 "zone_append": false, 00:27:54.969 "compare": false, 00:27:54.969 "compare_and_write": false, 00:27:54.969 "abort": true, 00:27:54.969 "seek_hole": false, 00:27:54.969 "seek_data": false, 00:27:54.969 "copy": true, 00:27:54.969 "nvme_iov_md": false 00:27:54.969 }, 00:27:54.969 "memory_domains": [ 00:27:54.969 { 00:27:54.969 "dma_device_id": "system", 00:27:54.969 "dma_device_type": 1 00:27:54.969 }, 00:27:54.969 { 
00:27:54.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.969 "dma_device_type": 2 00:27:54.969 } 00:27:54.969 ], 00:27:54.969 "driver_specific": {} 00:27:54.969 } 00:27:54.969 ] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:27:54.969 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.970 15:57:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.970 "name": "Existed_Raid", 00:27:54.970 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:54.970 "strip_size_kb": 64, 00:27:54.970 "state": "online", 00:27:54.970 "raid_level": "concat", 00:27:54.970 "superblock": true, 00:27:54.970 "num_base_bdevs": 4, 00:27:54.970 "num_base_bdevs_discovered": 4, 00:27:54.970 "num_base_bdevs_operational": 4, 00:27:54.970 "base_bdevs_list": [ 00:27:54.970 { 00:27:54.970 "name": "NewBaseBdev", 00:27:54.970 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:54.970 "is_configured": true, 00:27:54.970 "data_offset": 2048, 00:27:54.970 "data_size": 63488 00:27:54.970 }, 00:27:54.970 { 00:27:54.970 "name": "BaseBdev2", 00:27:54.970 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:54.970 "is_configured": true, 00:27:54.970 "data_offset": 2048, 00:27:54.970 "data_size": 63488 00:27:54.970 }, 00:27:54.970 { 00:27:54.970 "name": "BaseBdev3", 00:27:54.970 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:54.970 "is_configured": true, 00:27:54.970 "data_offset": 2048, 00:27:54.970 "data_size": 63488 00:27:54.970 }, 00:27:54.970 { 00:27:54.970 "name": "BaseBdev4", 00:27:54.970 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:54.970 "is_configured": true, 00:27:54.970 "data_offset": 2048, 00:27:54.970 "data_size": 63488 00:27:54.970 } 00:27:54.970 ] 00:27:54.970 }' 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.970 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.227 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:55.227 15:57:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:55.227 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:55.227 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:55.227 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:55.227 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.228 [2024-11-05 15:57:27.526298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:55.228 "name": "Existed_Raid", 00:27:55.228 "aliases": [ 00:27:55.228 "d7142932-e8f8-4504-9e68-4e4b1fb215a1" 00:27:55.228 ], 00:27:55.228 "product_name": "Raid Volume", 00:27:55.228 "block_size": 512, 00:27:55.228 "num_blocks": 253952, 00:27:55.228 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:55.228 "assigned_rate_limits": { 00:27:55.228 "rw_ios_per_sec": 0, 00:27:55.228 "rw_mbytes_per_sec": 0, 00:27:55.228 "r_mbytes_per_sec": 0, 00:27:55.228 "w_mbytes_per_sec": 0 00:27:55.228 }, 00:27:55.228 "claimed": false, 00:27:55.228 "zoned": false, 00:27:55.228 "supported_io_types": { 00:27:55.228 "read": true, 00:27:55.228 "write": true, 00:27:55.228 
"unmap": true, 00:27:55.228 "flush": true, 00:27:55.228 "reset": true, 00:27:55.228 "nvme_admin": false, 00:27:55.228 "nvme_io": false, 00:27:55.228 "nvme_io_md": false, 00:27:55.228 "write_zeroes": true, 00:27:55.228 "zcopy": false, 00:27:55.228 "get_zone_info": false, 00:27:55.228 "zone_management": false, 00:27:55.228 "zone_append": false, 00:27:55.228 "compare": false, 00:27:55.228 "compare_and_write": false, 00:27:55.228 "abort": false, 00:27:55.228 "seek_hole": false, 00:27:55.228 "seek_data": false, 00:27:55.228 "copy": false, 00:27:55.228 "nvme_iov_md": false 00:27:55.228 }, 00:27:55.228 "memory_domains": [ 00:27:55.228 { 00:27:55.228 "dma_device_id": "system", 00:27:55.228 "dma_device_type": 1 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.228 "dma_device_type": 2 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "dma_device_id": "system", 00:27:55.228 "dma_device_type": 1 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.228 "dma_device_type": 2 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "dma_device_id": "system", 00:27:55.228 "dma_device_type": 1 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.228 "dma_device_type": 2 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "dma_device_id": "system", 00:27:55.228 "dma_device_type": 1 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.228 "dma_device_type": 2 00:27:55.228 } 00:27:55.228 ], 00:27:55.228 "driver_specific": { 00:27:55.228 "raid": { 00:27:55.228 "uuid": "d7142932-e8f8-4504-9e68-4e4b1fb215a1", 00:27:55.228 "strip_size_kb": 64, 00:27:55.228 "state": "online", 00:27:55.228 "raid_level": "concat", 00:27:55.228 "superblock": true, 00:27:55.228 "num_base_bdevs": 4, 00:27:55.228 "num_base_bdevs_discovered": 4, 00:27:55.228 "num_base_bdevs_operational": 4, 00:27:55.228 "base_bdevs_list": [ 00:27:55.228 { 00:27:55.228 "name": 
"NewBaseBdev", 00:27:55.228 "uuid": "fa372679-6722-4d26-9d70-8c915d979b2b", 00:27:55.228 "is_configured": true, 00:27:55.228 "data_offset": 2048, 00:27:55.228 "data_size": 63488 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "name": "BaseBdev2", 00:27:55.228 "uuid": "fc53a972-c93a-4532-84c6-f38a252a5755", 00:27:55.228 "is_configured": true, 00:27:55.228 "data_offset": 2048, 00:27:55.228 "data_size": 63488 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "name": "BaseBdev3", 00:27:55.228 "uuid": "7d227993-cffc-4ba0-9610-52bfb7500eb3", 00:27:55.228 "is_configured": true, 00:27:55.228 "data_offset": 2048, 00:27:55.228 "data_size": 63488 00:27:55.228 }, 00:27:55.228 { 00:27:55.228 "name": "BaseBdev4", 00:27:55.228 "uuid": "5e4abc7a-6ec9-4512-ad41-d1b2695da36f", 00:27:55.228 "is_configured": true, 00:27:55.228 "data_offset": 2048, 00:27:55.228 "data_size": 63488 00:27:55.228 } 00:27:55.228 ] 00:27:55.228 } 00:27:55.228 } 00:27:55.228 }' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:55.228 BaseBdev2 00:27:55.228 BaseBdev3 00:27:55.228 BaseBdev4' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.228 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.486 15:57:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.486 [2024-11-05 15:57:27.742042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:55.486 [2024-11-05 15:57:27.742065] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:27:55.486 [2024-11-05 15:57:27.742117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:55.486 [2024-11-05 15:57:27.742170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:55.486 [2024-11-05 15:57:27.742178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69887 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 69887 ']' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 69887 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69887 00:27:55.486 killing process with pid 69887 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69887' 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 69887 00:27:55.486 [2024-11-05 15:57:27.770769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:55.486 15:57:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@976 -- # wait 69887 00:27:55.744 [2024-11-05 15:57:27.965387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:56.309 ************************************ 00:27:56.309 END TEST raid_state_function_test_sb 00:27:56.309 15:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:56.309 00:27:56.309 real 0m7.988s 00:27:56.309 user 0m12.987s 00:27:56.309 sys 0m1.276s 00:27:56.309 15:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:56.309 15:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:56.309 ************************************ 00:27:56.309 15:57:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:27:56.309 15:57:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:56.309 15:57:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:56.309 15:57:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:56.309 ************************************ 00:27:56.309 START TEST raid_superblock_test 00:27:56.309 ************************************ 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:56.309 15:57:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70520 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70520 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70520 ']' 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:56.309 15:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.309 [2024-11-05 15:57:28.615962] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:27:56.309 [2024-11-05 15:57:28.616057] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70520 ] 00:27:56.567 [2024-11-05 15:57:28.764057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.567 [2024-11-05 15:57:28.845827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.567 [2024-11-05 15:57:28.954932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:56.567 [2024-11-05 15:57:28.954974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:57.133 
15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.133 malloc1 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.133 [2024-11-05 15:57:29.492325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:57.133 [2024-11-05 15:57:29.492379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.133 [2024-11-05 15:57:29.492397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:57.133 [2024-11-05 15:57:29.492405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.133 [2024-11-05 15:57:29.494104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.133 [2024-11-05 15:57:29.494133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:57.133 pt1 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.133 malloc2 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.133 [2024-11-05 15:57:29.523664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:57.133 [2024-11-05 15:57:29.523815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.133 [2024-11-05 
15:57:29.523838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:57.133 [2024-11-05 15:57:29.523860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.133 [2024-11-05 15:57:29.525560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.133 [2024-11-05 15:57:29.525589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:57.133 pt2 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.133 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.393 malloc3 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.393 [2024-11-05 15:57:29.572190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:57.393 [2024-11-05 15:57:29.572233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.393 [2024-11-05 15:57:29.572250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:57.393 [2024-11-05 15:57:29.572258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.393 [2024-11-05 15:57:29.573946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.393 [2024-11-05 15:57:29.574062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:57.393 pt3 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.393 malloc4 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.393 [2024-11-05 15:57:29.603365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:57.393 [2024-11-05 15:57:29.603401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.393 [2024-11-05 15:57:29.603414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:57.393 [2024-11-05 15:57:29.603421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.393 [2024-11-05 15:57:29.605102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.393 [2024-11-05 15:57:29.605207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:57.393 pt4 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.393 [2024-11-05 15:57:29.611396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:57.393 [2024-11-05 15:57:29.612942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:57.393 [2024-11-05 15:57:29.613008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:57.393 [2024-11-05 15:57:29.613081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:57.393 [2024-11-05 15:57:29.613276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:57.393 [2024-11-05 15:57:29.613328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:57.393 [2024-11-05 15:57:29.613550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:57.393 [2024-11-05 15:57:29.613715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:57.393 [2024-11-05 15:57:29.613766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:57.393 [2024-11-05 15:57:29.613930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.393 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.393 "name": "raid_bdev1", 00:27:57.393 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:57.393 "strip_size_kb": 64, 00:27:57.393 "state": "online", 00:27:57.393 "raid_level": "concat", 00:27:57.393 "superblock": true, 00:27:57.393 "num_base_bdevs": 4, 00:27:57.393 "num_base_bdevs_discovered": 4, 00:27:57.394 "num_base_bdevs_operational": 4, 00:27:57.394 "base_bdevs_list": [ 00:27:57.394 { 00:27:57.394 "name": "pt1", 00:27:57.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.394 "is_configured": true, 00:27:57.394 
"data_offset": 2048, 00:27:57.394 "data_size": 63488 00:27:57.394 }, 00:27:57.394 { 00:27:57.394 "name": "pt2", 00:27:57.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.394 "is_configured": true, 00:27:57.394 "data_offset": 2048, 00:27:57.394 "data_size": 63488 00:27:57.394 }, 00:27:57.394 { 00:27:57.394 "name": "pt3", 00:27:57.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:57.394 "is_configured": true, 00:27:57.394 "data_offset": 2048, 00:27:57.394 "data_size": 63488 00:27:57.394 }, 00:27:57.394 { 00:27:57.394 "name": "pt4", 00:27:57.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:57.394 "is_configured": true, 00:27:57.394 "data_offset": 2048, 00:27:57.394 "data_size": 63488 00:27:57.394 } 00:27:57.394 ] 00:27:57.394 }' 00:27:57.394 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.394 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:57.657 [2024-11-05 15:57:29.951722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.657 "name": "raid_bdev1", 00:27:57.657 "aliases": [ 00:27:57.657 "5447ffc1-1e4a-4ba3-954e-812b89dfcc33" 00:27:57.657 ], 00:27:57.657 "product_name": "Raid Volume", 00:27:57.657 "block_size": 512, 00:27:57.657 "num_blocks": 253952, 00:27:57.657 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:57.657 "assigned_rate_limits": { 00:27:57.657 "rw_ios_per_sec": 0, 00:27:57.657 "rw_mbytes_per_sec": 0, 00:27:57.657 "r_mbytes_per_sec": 0, 00:27:57.657 "w_mbytes_per_sec": 0 00:27:57.657 }, 00:27:57.657 "claimed": false, 00:27:57.657 "zoned": false, 00:27:57.657 "supported_io_types": { 00:27:57.657 "read": true, 00:27:57.657 "write": true, 00:27:57.657 "unmap": true, 00:27:57.657 "flush": true, 00:27:57.657 "reset": true, 00:27:57.657 "nvme_admin": false, 00:27:57.657 "nvme_io": false, 00:27:57.657 "nvme_io_md": false, 00:27:57.657 "write_zeroes": true, 00:27:57.657 "zcopy": false, 00:27:57.657 "get_zone_info": false, 00:27:57.657 "zone_management": false, 00:27:57.657 "zone_append": false, 00:27:57.657 "compare": false, 00:27:57.657 "compare_and_write": false, 00:27:57.657 "abort": false, 00:27:57.657 "seek_hole": false, 00:27:57.657 "seek_data": false, 00:27:57.657 "copy": false, 00:27:57.657 "nvme_iov_md": false 00:27:57.657 }, 00:27:57.657 "memory_domains": [ 00:27:57.657 { 00:27:57.657 "dma_device_id": "system", 00:27:57.657 "dma_device_type": 1 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.657 "dma_device_type": 2 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "dma_device_id": "system", 00:27:57.657 "dma_device_type": 1 00:27:57.657 }, 00:27:57.657 { 
00:27:57.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.657 "dma_device_type": 2 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "dma_device_id": "system", 00:27:57.657 "dma_device_type": 1 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.657 "dma_device_type": 2 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "dma_device_id": "system", 00:27:57.657 "dma_device_type": 1 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.657 "dma_device_type": 2 00:27:57.657 } 00:27:57.657 ], 00:27:57.657 "driver_specific": { 00:27:57.657 "raid": { 00:27:57.657 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:57.657 "strip_size_kb": 64, 00:27:57.657 "state": "online", 00:27:57.657 "raid_level": "concat", 00:27:57.657 "superblock": true, 00:27:57.657 "num_base_bdevs": 4, 00:27:57.657 "num_base_bdevs_discovered": 4, 00:27:57.657 "num_base_bdevs_operational": 4, 00:27:57.657 "base_bdevs_list": [ 00:27:57.657 { 00:27:57.657 "name": "pt1", 00:27:57.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.657 "is_configured": true, 00:27:57.657 "data_offset": 2048, 00:27:57.657 "data_size": 63488 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "name": "pt2", 00:27:57.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.657 "is_configured": true, 00:27:57.657 "data_offset": 2048, 00:27:57.657 "data_size": 63488 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "name": "pt3", 00:27:57.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:57.657 "is_configured": true, 00:27:57.657 "data_offset": 2048, 00:27:57.657 "data_size": 63488 00:27:57.657 }, 00:27:57.657 { 00:27:57.657 "name": "pt4", 00:27:57.657 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:57.657 "is_configured": true, 00:27:57.657 "data_offset": 2048, 00:27:57.657 "data_size": 63488 00:27:57.657 } 00:27:57.657 ] 00:27:57.657 } 00:27:57.657 } 00:27:57.657 }' 00:27:57.657 15:57:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:57.657 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:57.657 pt2 00:27:57.657 pt3 00:27:57.657 pt4' 00:27:57.657 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.657 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:57.657 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:57.657 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:57.657 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.657 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.658 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:57.916 [2024-11-05 15:57:30.159728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5447ffc1-1e4a-4ba3-954e-812b89dfcc33 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5447ffc1-1e4a-4ba3-954e-812b89dfcc33 ']' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 [2024-11-05 15:57:30.191467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:57.916 [2024-11-05 15:57:30.191563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:57.916 [2024-11-05 15:57:30.191665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:57.916 [2024-11-05 15:57:30.191770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:57.916 [2024-11-05 15:57:30.191893] bdev_raid.c: 380:raid_bdev_cleanup: 
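The `cmp_base_bdev='512   '` values and the `[[ 512 == \5\1\2\ \ \ ]]` comparisons above hinge on a `jq` detail: in `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, null fields join as empty strings, so a 512-byte-block bdev with no metadata yields `512` followed by three trailing spaces. A small sketch with an illustrative stand-in object (not the real bdev dump from this run):

```shell
# The three md/dif fields are null for the passthru bdevs in this test;
# jq's join(" ") treats null as "", leaving "512" plus three separator spaces.
echo '{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}' \
  | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# prints: 512 followed by three trailing spaces ("512   ")
```

That is why the test's `[[ ... ]]` pattern escapes each trailing space individually: the comparison must match the whitespace exactly, not just the leading `512`.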
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 
00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.916 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.916 [2024-11-05 15:57:30.303518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:57.916 [2024-11-05 15:57:30.305074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:57.916 [2024-11-05 15:57:30.305113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:57.916 [2024-11-05 15:57:30.305140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:57.917 [2024-11-05 15:57:30.305179] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:57.917 [2024-11-05 15:57:30.305222] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:57.917 [2024-11-05 15:57:30.305237] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc3 00:27:57.917 [2024-11-05 15:57:30.305253] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:57.917 [2024-11-05 15:57:30.305262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:57.917 [2024-11-05 15:57:30.305272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:57.917 request: 00:27:57.917 { 00:27:57.917 "name": "raid_bdev1", 00:27:57.917 "raid_level": "concat", 00:27:57.917 "base_bdevs": [ 00:27:57.917 "malloc1", 00:27:57.917 "malloc2", 00:27:57.917 "malloc3", 00:27:57.917 "malloc4" 00:27:57.917 ], 00:27:57.917 "strip_size_kb": 64, 00:27:57.917 "superblock": false, 00:27:57.917 "method": "bdev_raid_create", 00:27:57.917 "req_id": 1 00:27:57.917 } 00:27:57.917 Got JSON-RPC error response 00:27:57.917 response: 00:27:57.917 { 00:27:57.917 "code": -17, 00:27:57.917 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:57.917 } 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- 
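The `NOT rpc_cmd bdev_raid_create ...` sequence above is an expected-failure check: recreating `raid_bdev1` from base bdevs that already carry another raid bdev's superblock must fail with `-17` ("File exists"), and the surrounding `es=...` bookkeeping from autotest_common.sh inverts that exit status. A hypothetical minimal re-creation of the inversion pattern (the real helper in autotest_common.sh also tracks and validates the exact exit code, which this sketch omits):

```shell
# Hypothetical simplified NOT helper: succeed only when the wrapped
# command fails, mirroring the expected-failure pattern in the trace.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded -> test failure
  fi
  return 0     # command failed as expected -> test success
}

NOT false && echo "expected failure observed"
```

Used here, `NOT rpc_cmd bdev_raid_create ...` passes precisely because the RPC returns the `-17 File exists` error shown in the JSON-RPC response.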
common/autotest_common.sh@10 -- # set +x 00:27:57.917 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.174 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:58.174 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:58.174 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:58.174 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.174 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.174 [2024-11-05 15:57:30.347500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:58.174 [2024-11-05 15:57:30.347542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.174 [2024-11-05 15:57:30.347554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:58.174 [2024-11-05 15:57:30.347563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.174 [2024-11-05 15:57:30.349328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.175 [2024-11-05 15:57:30.349441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:58.175 [2024-11-05 15:57:30.349511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:58.175 [2024-11-05 15:57:30.349558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:58.175 pt1 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.175 "name": "raid_bdev1", 00:27:58.175 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:58.175 "strip_size_kb": 64, 00:27:58.175 "state": "configuring", 00:27:58.175 "raid_level": "concat", 00:27:58.175 "superblock": true, 00:27:58.175 "num_base_bdevs": 4, 00:27:58.175 "num_base_bdevs_discovered": 1, 00:27:58.175 "num_base_bdevs_operational": 4, 00:27:58.175 "base_bdevs_list": [ 00:27:58.175 { 00:27:58.175 
"name": "pt1", 00:27:58.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:58.175 "is_configured": true, 00:27:58.175 "data_offset": 2048, 00:27:58.175 "data_size": 63488 00:27:58.175 }, 00:27:58.175 { 00:27:58.175 "name": null, 00:27:58.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.175 "is_configured": false, 00:27:58.175 "data_offset": 2048, 00:27:58.175 "data_size": 63488 00:27:58.175 }, 00:27:58.175 { 00:27:58.175 "name": null, 00:27:58.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:58.175 "is_configured": false, 00:27:58.175 "data_offset": 2048, 00:27:58.175 "data_size": 63488 00:27:58.175 }, 00:27:58.175 { 00:27:58.175 "name": null, 00:27:58.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:58.175 "is_configured": false, 00:27:58.175 "data_offset": 2048, 00:27:58.175 "data_size": 63488 00:27:58.175 } 00:27:58.175 ] 00:27:58.175 }' 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.175 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.433 [2024-11-05 15:57:30.655579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:58.433 [2024-11-05 15:57:30.655635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.433 [2024-11-05 15:57:30.655650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:58.433 [2024-11-05 15:57:30.655660] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.433 [2024-11-05 15:57:30.656002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.433 [2024-11-05 15:57:30.656016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:58.433 [2024-11-05 15:57:30.656084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:58.433 [2024-11-05 15:57:30.656101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:58.433 pt2 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.433 [2024-11-05 15:57:30.663578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.433 "name": "raid_bdev1", 00:27:58.433 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:58.433 "strip_size_kb": 64, 00:27:58.433 "state": "configuring", 00:27:58.433 "raid_level": "concat", 00:27:58.433 "superblock": true, 00:27:58.433 "num_base_bdevs": 4, 00:27:58.433 "num_base_bdevs_discovered": 1, 00:27:58.433 "num_base_bdevs_operational": 4, 00:27:58.433 "base_bdevs_list": [ 00:27:58.433 { 00:27:58.433 "name": "pt1", 00:27:58.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:58.433 "is_configured": true, 00:27:58.433 "data_offset": 2048, 00:27:58.433 "data_size": 63488 00:27:58.433 }, 00:27:58.433 { 00:27:58.433 "name": null, 00:27:58.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.433 "is_configured": false, 00:27:58.433 "data_offset": 0, 00:27:58.433 "data_size": 63488 00:27:58.433 }, 00:27:58.433 { 00:27:58.433 "name": null, 00:27:58.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:58.433 "is_configured": false, 00:27:58.433 "data_offset": 2048, 00:27:58.433 "data_size": 
63488 00:27:58.433 }, 00:27:58.433 { 00:27:58.433 "name": null, 00:27:58.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:58.433 "is_configured": false, 00:27:58.433 "data_offset": 2048, 00:27:58.433 "data_size": 63488 00:27:58.433 } 00:27:58.433 ] 00:27:58.433 }' 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.433 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.691 [2024-11-05 15:57:30.975632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:58.691 [2024-11-05 15:57:30.975679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.691 [2024-11-05 15:57:30.975693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:58.691 [2024-11-05 15:57:30.975701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.691 [2024-11-05 15:57:30.976046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.691 [2024-11-05 15:57:30.976062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:58.691 [2024-11-05 15:57:30.976122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:58.691 [2024-11-05 15:57:30.976137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:27:58.691 pt2 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.691 [2024-11-05 15:57:30.983614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:58.691 [2024-11-05 15:57:30.983648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.691 [2024-11-05 15:57:30.983663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:58.691 [2024-11-05 15:57:30.983669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.691 [2024-11-05 15:57:30.983957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.691 [2024-11-05 15:57:30.983971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:58.691 [2024-11-05 15:57:30.984015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:58.691 [2024-11-05 15:57:30.984028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:58.691 pt3 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.691 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.691 [2024-11-05 15:57:30.991600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:58.691 [2024-11-05 15:57:30.991632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.691 [2024-11-05 15:57:30.991644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:58.691 [2024-11-05 15:57:30.991650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.691 [2024-11-05 15:57:30.991932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.691 [2024-11-05 15:57:30.991946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:58.691 [2024-11-05 15:57:30.991988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:58.691 [2024-11-05 15:57:30.992000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:58.692 [2024-11-05 15:57:30.992099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:58.692 [2024-11-05 15:57:30.992105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:58.692 [2024-11-05 15:57:30.992294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:58.692 [2024-11-05 15:57:30.992398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:58.692 [2024-11-05 15:57:30.992406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:27:58.692 [2024-11-05 15:57:30.992497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.692 pt4 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.692 15:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:27:58.692 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.692 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.692 "name": "raid_bdev1", 00:27:58.692 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:58.692 "strip_size_kb": 64, 00:27:58.692 "state": "online", 00:27:58.692 "raid_level": "concat", 00:27:58.692 "superblock": true, 00:27:58.692 "num_base_bdevs": 4, 00:27:58.692 "num_base_bdevs_discovered": 4, 00:27:58.692 "num_base_bdevs_operational": 4, 00:27:58.692 "base_bdevs_list": [ 00:27:58.692 { 00:27:58.692 "name": "pt1", 00:27:58.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:58.692 "is_configured": true, 00:27:58.692 "data_offset": 2048, 00:27:58.692 "data_size": 63488 00:27:58.692 }, 00:27:58.692 { 00:27:58.692 "name": "pt2", 00:27:58.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.692 "is_configured": true, 00:27:58.692 "data_offset": 2048, 00:27:58.692 "data_size": 63488 00:27:58.692 }, 00:27:58.692 { 00:27:58.692 "name": "pt3", 00:27:58.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:58.692 "is_configured": true, 00:27:58.692 "data_offset": 2048, 00:27:58.692 "data_size": 63488 00:27:58.692 }, 00:27:58.692 { 00:27:58.692 "name": "pt4", 00:27:58.692 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:58.692 "is_configured": true, 00:27:58.692 "data_offset": 2048, 00:27:58.692 "data_size": 63488 00:27:58.692 } 00:27:58.692 ] 00:27:58.692 }' 00:27:58.692 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.692 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.949 [2024-11-05 15:57:31.307993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.949 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:58.949 "name": "raid_bdev1", 00:27:58.949 "aliases": [ 00:27:58.949 "5447ffc1-1e4a-4ba3-954e-812b89dfcc33" 00:27:58.949 ], 00:27:58.949 "product_name": "Raid Volume", 00:27:58.949 "block_size": 512, 00:27:58.949 "num_blocks": 253952, 00:27:58.949 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:58.949 "assigned_rate_limits": { 00:27:58.949 "rw_ios_per_sec": 0, 00:27:58.949 "rw_mbytes_per_sec": 0, 00:27:58.949 "r_mbytes_per_sec": 0, 00:27:58.949 "w_mbytes_per_sec": 0 00:27:58.949 }, 00:27:58.949 "claimed": false, 00:27:58.949 "zoned": false, 00:27:58.949 "supported_io_types": { 00:27:58.949 "read": true, 00:27:58.949 "write": true, 00:27:58.949 "unmap": true, 00:27:58.949 "flush": true, 00:27:58.949 "reset": true, 00:27:58.949 "nvme_admin": false, 00:27:58.949 "nvme_io": false, 00:27:58.949 "nvme_io_md": false, 00:27:58.949 "write_zeroes": true, 00:27:58.949 "zcopy": 
false, 00:27:58.949 "get_zone_info": false, 00:27:58.949 "zone_management": false, 00:27:58.949 "zone_append": false, 00:27:58.949 "compare": false, 00:27:58.949 "compare_and_write": false, 00:27:58.949 "abort": false, 00:27:58.949 "seek_hole": false, 00:27:58.949 "seek_data": false, 00:27:58.949 "copy": false, 00:27:58.949 "nvme_iov_md": false 00:27:58.949 }, 00:27:58.949 "memory_domains": [ 00:27:58.949 { 00:27:58.949 "dma_device_id": "system", 00:27:58.949 "dma_device_type": 1 00:27:58.949 }, 00:27:58.949 { 00:27:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.949 "dma_device_type": 2 00:27:58.949 }, 00:27:58.949 { 00:27:58.949 "dma_device_id": "system", 00:27:58.949 "dma_device_type": 1 00:27:58.949 }, 00:27:58.949 { 00:27:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.949 "dma_device_type": 2 00:27:58.949 }, 00:27:58.949 { 00:27:58.949 "dma_device_id": "system", 00:27:58.949 "dma_device_type": 1 00:27:58.949 }, 00:27:58.949 { 00:27:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.949 "dma_device_type": 2 00:27:58.949 }, 00:27:58.949 { 00:27:58.949 "dma_device_id": "system", 00:27:58.949 "dma_device_type": 1 00:27:58.949 }, 00:27:58.949 { 00:27:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.949 "dma_device_type": 2 00:27:58.949 } 00:27:58.949 ], 00:27:58.949 "driver_specific": { 00:27:58.949 "raid": { 00:27:58.949 "uuid": "5447ffc1-1e4a-4ba3-954e-812b89dfcc33", 00:27:58.949 "strip_size_kb": 64, 00:27:58.949 "state": "online", 00:27:58.950 "raid_level": "concat", 00:27:58.950 "superblock": true, 00:27:58.950 "num_base_bdevs": 4, 00:27:58.950 "num_base_bdevs_discovered": 4, 00:27:58.950 "num_base_bdevs_operational": 4, 00:27:58.950 "base_bdevs_list": [ 00:27:58.950 { 00:27:58.950 "name": "pt1", 00:27:58.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:58.950 "is_configured": true, 00:27:58.950 "data_offset": 2048, 00:27:58.950 "data_size": 63488 00:27:58.950 }, 00:27:58.950 { 00:27:58.950 "name": "pt2", 
00:27:58.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.950 "is_configured": true, 00:27:58.950 "data_offset": 2048, 00:27:58.950 "data_size": 63488 00:27:58.950 }, 00:27:58.950 { 00:27:58.950 "name": "pt3", 00:27:58.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:58.950 "is_configured": true, 00:27:58.950 "data_offset": 2048, 00:27:58.950 "data_size": 63488 00:27:58.950 }, 00:27:58.950 { 00:27:58.950 "name": "pt4", 00:27:58.950 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:58.950 "is_configured": true, 00:27:58.950 "data_offset": 2048, 00:27:58.950 "data_size": 63488 00:27:58.950 } 00:27:58.950 ] 00:27:58.950 } 00:27:58.950 } 00:27:58.950 }' 00:27:58.950 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:58.950 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:58.950 pt2 00:27:58.950 pt3 00:27:58.950 pt4' 00:27:58.950 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.207 15:57:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:59.207 [2024-11-05 15:57:31.568018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5447ffc1-1e4a-4ba3-954e-812b89dfcc33 '!=' 5447ffc1-1e4a-4ba3-954e-812b89dfcc33 ']' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70520 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70520 ']' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70520 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70520 00:27:59.207 killing process with pid 70520 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70520' 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70520 00:27:59.207 15:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70520 00:27:59.207 [2024-11-05 15:57:31.616951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:59.207 [2024-11-05 15:57:31.617014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:59.207 [2024-11-05 15:57:31.617074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:59.207 [2024-11-05 15:57:31.617110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:59.465 [2024-11-05 15:57:31.810167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:00.069 ************************************ 00:28:00.069 END TEST 
raid_superblock_test 00:28:00.069 ************************************ 00:28:00.069 15:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:28:00.069 00:28:00.069 real 0m3.810s 00:28:00.069 user 0m5.575s 00:28:00.069 sys 0m0.603s 00:28:00.069 15:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:00.069 15:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.069 15:57:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:28:00.069 15:57:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:00.069 15:57:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:00.069 15:57:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:00.069 ************************************ 00:28:00.069 START TEST raid_read_error_test 00:28:00.069 ************************************ 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
mktemp -p /raidtest 00:28:00.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oD83rnmZsO 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70762 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70762 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 70762 ']' 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.069 15:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:00.069 [2024-11-05 15:57:32.479588] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:00.069 [2024-11-05 15:57:32.479700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:28:00.326 [2024-11-05 15:57:32.638958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.326 [2024-11-05 15:57:32.735410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.583 [2024-11-05 15:57:32.869576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.583 [2024-11-05 15:57:32.869615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:01.147 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:01.147 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:28:01.147 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 BaseBdev1_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 true 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 [2024-11-05 15:57:33.361362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:01.148 [2024-11-05 15:57:33.361530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.148 [2024-11-05 15:57:33.361556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:01.148 [2024-11-05 15:57:33.361567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.148 [2024-11-05 15:57:33.363856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.148 [2024-11-05 15:57:33.363973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:01.148 BaseBdev1 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 BaseBdev2_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 true 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 [2024-11-05 15:57:33.405130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:01.148 [2024-11-05 15:57:33.405179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.148 [2024-11-05 15:57:33.405195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:01.148 [2024-11-05 15:57:33.405205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.148 [2024-11-05 15:57:33.407304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.148 [2024-11-05 15:57:33.407337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:01.148 BaseBdev2 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 BaseBdev3_malloc 00:28:01.148 15:57:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 true 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 [2024-11-05 15:57:33.460803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:01.148 [2024-11-05 15:57:33.460871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.148 [2024-11-05 15:57:33.460892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:01.148 [2024-11-05 15:57:33.460904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.148 [2024-11-05 15:57:33.463030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.148 [2024-11-05 15:57:33.463065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:01.148 BaseBdev3 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 BaseBdev4_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 true 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 [2024-11-05 15:57:33.504322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:01.148 [2024-11-05 15:57:33.504369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.148 [2024-11-05 15:57:33.504385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:01.148 [2024-11-05 15:57:33.504395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.148 [2024-11-05 15:57:33.506446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.148 [2024-11-05 15:57:33.506481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:01.148 BaseBdev4 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 [2024-11-05 15:57:33.512390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.148 [2024-11-05 15:57:33.514182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:01.148 [2024-11-05 15:57:33.514268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:01.148 [2024-11-05 15:57:33.514335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:01.148 [2024-11-05 15:57:33.514563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:28:01.148 [2024-11-05 15:57:33.514583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:28:01.148 [2024-11-05 15:57:33.514818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:28:01.148 [2024-11-05 15:57:33.514975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:28:01.148 [2024-11-05 15:57:33.514991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:28:01.148 [2024-11-05 15:57:33.515131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:28:01.148 15:57:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.148 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.148 "name": "raid_bdev1", 00:28:01.148 "uuid": "9a77770c-a2aa-48c2-b54b-56d0861e5b4c", 00:28:01.148 "strip_size_kb": 64, 00:28:01.148 "state": "online", 00:28:01.148 "raid_level": "concat", 00:28:01.148 "superblock": true, 00:28:01.149 "num_base_bdevs": 4, 00:28:01.149 "num_base_bdevs_discovered": 4, 00:28:01.149 "num_base_bdevs_operational": 4, 00:28:01.149 "base_bdevs_list": [ 
00:28:01.149 { 00:28:01.149 "name": "BaseBdev1", 00:28:01.149 "uuid": "f9e3e913-d216-5113-8dd3-d46a8530811d", 00:28:01.149 "is_configured": true, 00:28:01.149 "data_offset": 2048, 00:28:01.149 "data_size": 63488 00:28:01.149 }, 00:28:01.149 { 00:28:01.149 "name": "BaseBdev2", 00:28:01.149 "uuid": "d69d48ef-d63c-57d2-a168-57c445fbdf16", 00:28:01.149 "is_configured": true, 00:28:01.149 "data_offset": 2048, 00:28:01.149 "data_size": 63488 00:28:01.149 }, 00:28:01.149 { 00:28:01.149 "name": "BaseBdev3", 00:28:01.149 "uuid": "b85d1f90-8a63-564e-b231-f2066d076f47", 00:28:01.149 "is_configured": true, 00:28:01.149 "data_offset": 2048, 00:28:01.149 "data_size": 63488 00:28:01.149 }, 00:28:01.149 { 00:28:01.149 "name": "BaseBdev4", 00:28:01.149 "uuid": "094688aa-6fa0-555d-911b-25fe78a7ad33", 00:28:01.149 "is_configured": true, 00:28:01.149 "data_offset": 2048, 00:28:01.149 "data_size": 63488 00:28:01.149 } 00:28:01.149 ] 00:28:01.149 }' 00:28:01.149 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.149 15:57:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.406 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:01.406 15:57:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:01.663 [2024-11-05 15:57:33.889396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.594 15:57:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.594 15:57:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.594 "name": "raid_bdev1", 00:28:02.594 "uuid": "9a77770c-a2aa-48c2-b54b-56d0861e5b4c", 00:28:02.594 "strip_size_kb": 64, 00:28:02.594 "state": "online", 00:28:02.594 "raid_level": "concat", 00:28:02.594 "superblock": true, 00:28:02.594 "num_base_bdevs": 4, 00:28:02.594 "num_base_bdevs_discovered": 4, 00:28:02.594 "num_base_bdevs_operational": 4, 00:28:02.594 "base_bdevs_list": [ 00:28:02.594 { 00:28:02.594 "name": "BaseBdev1", 00:28:02.594 "uuid": "f9e3e913-d216-5113-8dd3-d46a8530811d", 00:28:02.594 "is_configured": true, 00:28:02.594 "data_offset": 2048, 00:28:02.594 "data_size": 63488 00:28:02.594 }, 00:28:02.594 { 00:28:02.594 "name": "BaseBdev2", 00:28:02.594 "uuid": "d69d48ef-d63c-57d2-a168-57c445fbdf16", 00:28:02.594 "is_configured": true, 00:28:02.594 "data_offset": 2048, 00:28:02.594 "data_size": 63488 00:28:02.594 }, 00:28:02.594 { 00:28:02.594 "name": "BaseBdev3", 00:28:02.594 "uuid": "b85d1f90-8a63-564e-b231-f2066d076f47", 00:28:02.594 "is_configured": true, 00:28:02.594 "data_offset": 2048, 00:28:02.594 "data_size": 63488 00:28:02.594 }, 00:28:02.594 { 00:28:02.594 "name": "BaseBdev4", 00:28:02.594 "uuid": "094688aa-6fa0-555d-911b-25fe78a7ad33", 00:28:02.594 "is_configured": true, 00:28:02.594 "data_offset": 2048, 00:28:02.594 "data_size": 63488 00:28:02.594 } 00:28:02.594 ] 00:28:02.594 }' 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.594 15:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.852 [2024-11-05 15:57:35.151324] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:02.852 [2024-11-05 15:57:35.151355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:02.852 [2024-11-05 15:57:35.154358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:02.852 [2024-11-05 15:57:35.154418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.852 [2024-11-05 15:57:35.154479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:02.852 [2024-11-05 15:57:35.154493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:28:02.852 { 00:28:02.852 "results": [ 00:28:02.852 { 00:28:02.852 "job": "raid_bdev1", 00:28:02.852 "core_mask": "0x1", 00:28:02.852 "workload": "randrw", 00:28:02.852 "percentage": 50, 00:28:02.852 "status": "finished", 00:28:02.852 "queue_depth": 1, 00:28:02.852 "io_size": 131072, 00:28:02.852 "runtime": 1.260042, 00:28:02.852 "iops": 15018.547000814258, 00:28:02.852 "mibps": 1877.3183751017823, 00:28:02.852 "io_failed": 1, 00:28:02.852 "io_timeout": 0, 00:28:02.852 "avg_latency_us": 91.04703343156183, 00:28:02.852 "min_latency_us": 33.28, 00:28:02.852 "max_latency_us": 1676.2092307692308 00:28:02.852 } 00:28:02.852 ], 00:28:02.852 "core_count": 1 00:28:02.852 } 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70762 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 70762 ']' 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 70762 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70762 00:28:02.852 killing process with pid 70762 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70762' 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 70762 00:28:02.852 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 70762 00:28:02.852 [2024-11-05 15:57:35.179067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:03.117 [2024-11-05 15:57:35.378871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oD83rnmZsO 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:28:03.874 00:28:03.874 real 0m3.580s 00:28:03.874 user 0m4.219s 00:28:03.874 sys 0m0.407s 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:28:03.874 15:57:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.874 ************************************ 00:28:03.874 END TEST raid_read_error_test 00:28:03.874 ************************************ 00:28:03.874 15:57:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:28:03.874 15:57:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:03.874 15:57:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:03.874 15:57:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:03.874 ************************************ 00:28:03.874 START TEST raid_write_error_test 00:28:03.874 ************************************ 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Bc2v7EqmoB 00:28:03.874 15:57:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70897 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70897 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 70897 ']' 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:03.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.874 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:03.874 [2024-11-05 15:57:36.090798] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:03.874 [2024-11-05 15:57:36.090899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:28:03.874 [2024-11-05 15:57:36.241058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.131 [2024-11-05 15:57:36.322943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.131 [2024-11-05 15:57:36.431239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:04.131 [2024-11-05 15:57:36.431285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 BaseBdev1_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 true 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 [2024-11-05 15:57:36.883250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:04.697 [2024-11-05 15:57:36.883293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.697 [2024-11-05 15:57:36.883308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:04.697 [2024-11-05 15:57:36.883316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.697 [2024-11-05 15:57:36.885039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.697 [2024-11-05 15:57:36.885065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:04.697 BaseBdev1 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 BaseBdev2_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:04.697 15:57:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 true 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 [2024-11-05 15:57:36.922692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:04.697 [2024-11-05 15:57:36.922727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.697 [2024-11-05 15:57:36.922740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:04.697 [2024-11-05 15:57:36.922749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.697 [2024-11-05 15:57:36.924442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.697 [2024-11-05 15:57:36.924467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:04.697 BaseBdev2 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:28:04.697 BaseBdev3_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 true 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 [2024-11-05 15:57:36.975331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:04.697 [2024-11-05 15:57:36.975370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.697 [2024-11-05 15:57:36.975384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:04.697 [2024-11-05 15:57:36.975393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.697 [2024-11-05 15:57:36.977105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.697 [2024-11-05 15:57:36.977130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:04.697 BaseBdev3 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.697 15:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 BaseBdev4_malloc 00:28:04.697 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.698 true 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.698 [2024-11-05 15:57:37.014660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:04.698 [2024-11-05 15:57:37.014692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.698 [2024-11-05 15:57:37.014704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:04.698 [2024-11-05 15:57:37.014712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.698 [2024-11-05 15:57:37.016398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.698 [2024-11-05 15:57:37.016425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:04.698 BaseBdev4 
00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.698 [2024-11-05 15:57:37.022718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:04.698 [2024-11-05 15:57:37.024224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:04.698 [2024-11-05 15:57:37.024287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:04.698 [2024-11-05 15:57:37.024340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:04.698 [2024-11-05 15:57:37.024520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:28:04.698 [2024-11-05 15:57:37.024536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:28:04.698 [2024-11-05 15:57:37.024726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:28:04.698 [2024-11-05 15:57:37.024855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:28:04.698 [2024-11-05 15:57:37.024868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:28:04.698 [2024-11-05 15:57:37.024979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.698 "name": "raid_bdev1", 00:28:04.698 "uuid": "e1fbe5f9-e5c4-4db6-805e-774cb7a2179b", 00:28:04.698 "strip_size_kb": 64, 00:28:04.698 "state": "online", 00:28:04.698 "raid_level": "concat", 00:28:04.698 "superblock": true, 00:28:04.698 "num_base_bdevs": 4, 00:28:04.698 "num_base_bdevs_discovered": 4, 00:28:04.698 
"num_base_bdevs_operational": 4, 00:28:04.698 "base_bdevs_list": [ 00:28:04.698 { 00:28:04.698 "name": "BaseBdev1", 00:28:04.698 "uuid": "b9a5dd25-374e-53af-824f-0a173f476a33", 00:28:04.698 "is_configured": true, 00:28:04.698 "data_offset": 2048, 00:28:04.698 "data_size": 63488 00:28:04.698 }, 00:28:04.698 { 00:28:04.698 "name": "BaseBdev2", 00:28:04.698 "uuid": "8ed330d1-8d67-58c8-84ec-447e4c050dc0", 00:28:04.698 "is_configured": true, 00:28:04.698 "data_offset": 2048, 00:28:04.698 "data_size": 63488 00:28:04.698 }, 00:28:04.698 { 00:28:04.698 "name": "BaseBdev3", 00:28:04.698 "uuid": "75803284-4ef0-5126-930b-cc27e7575e8f", 00:28:04.698 "is_configured": true, 00:28:04.698 "data_offset": 2048, 00:28:04.698 "data_size": 63488 00:28:04.698 }, 00:28:04.698 { 00:28:04.698 "name": "BaseBdev4", 00:28:04.698 "uuid": "7f0849b0-7f0f-5920-b501-63a38cb09c33", 00:28:04.698 "is_configured": true, 00:28:04.698 "data_offset": 2048, 00:28:04.698 "data_size": 63488 00:28:04.698 } 00:28:04.698 ] 00:28:04.698 }' 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:04.698 15:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.956 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:04.956 15:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:05.213 [2024-11-05 15:57:37.383558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:28:06.145 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.146 15:57:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.146 "name": "raid_bdev1", 00:28:06.146 "uuid": "e1fbe5f9-e5c4-4db6-805e-774cb7a2179b", 00:28:06.146 "strip_size_kb": 64, 00:28:06.146 "state": "online", 00:28:06.146 "raid_level": "concat", 00:28:06.146 "superblock": true, 00:28:06.146 "num_base_bdevs": 4, 00:28:06.146 "num_base_bdevs_discovered": 4, 00:28:06.146 "num_base_bdevs_operational": 4, 00:28:06.146 "base_bdevs_list": [ 00:28:06.146 { 00:28:06.146 "name": "BaseBdev1", 00:28:06.146 "uuid": "b9a5dd25-374e-53af-824f-0a173f476a33", 00:28:06.146 "is_configured": true, 00:28:06.146 "data_offset": 2048, 00:28:06.146 "data_size": 63488 00:28:06.146 }, 00:28:06.146 { 00:28:06.146 "name": "BaseBdev2", 00:28:06.146 "uuid": "8ed330d1-8d67-58c8-84ec-447e4c050dc0", 00:28:06.146 "is_configured": true, 00:28:06.146 "data_offset": 2048, 00:28:06.146 "data_size": 63488 00:28:06.146 }, 00:28:06.146 { 00:28:06.146 "name": "BaseBdev3", 00:28:06.146 "uuid": "75803284-4ef0-5126-930b-cc27e7575e8f", 00:28:06.146 "is_configured": true, 00:28:06.146 "data_offset": 2048, 00:28:06.146 "data_size": 63488 00:28:06.146 }, 00:28:06.146 { 00:28:06.146 "name": "BaseBdev4", 00:28:06.146 "uuid": "7f0849b0-7f0f-5920-b501-63a38cb09c33", 00:28:06.146 "is_configured": true, 00:28:06.146 "data_offset": 2048, 00:28:06.146 "data_size": 63488 00:28:06.146 } 00:28:06.146 ] 00:28:06.146 }' 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:06.146 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.413 [2024-11-05 15:57:38.584102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:06.413 [2024-11-05 15:57:38.584128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.413 [2024-11-05 15:57:38.586480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.413 [2024-11-05 15:57:38.586531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.413 [2024-11-05 15:57:38.586566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.413 [2024-11-05 15:57:38.586581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:28:06.413 { 00:28:06.413 "results": [ 00:28:06.413 { 00:28:06.413 "job": "raid_bdev1", 00:28:06.413 "core_mask": "0x1", 00:28:06.413 "workload": "randrw", 00:28:06.413 "percentage": 50, 00:28:06.413 "status": "finished", 00:28:06.413 "queue_depth": 1, 00:28:06.413 "io_size": 131072, 00:28:06.413 "runtime": 1.199041, 00:28:06.413 "iops": 18617.378388228593, 00:28:06.413 "mibps": 2327.172298528574, 00:28:06.413 "io_failed": 1, 00:28:06.413 "io_timeout": 0, 00:28:06.413 "avg_latency_us": 73.57094758314612, 00:28:06.413 "min_latency_us": 25.796923076923076, 00:28:06.413 "max_latency_us": 1342.2276923076922 00:28:06.413 } 00:28:06.413 ], 00:28:06.413 "core_count": 1 00:28:06.413 } 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70897 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 70897 ']' 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 70897 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70897 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:06.413 killing process with pid 70897 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70897' 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 70897 00:28:06.413 15:57:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 70897 00:28:06.413 [2024-11-05 15:57:38.612993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:06.413 [2024-11-05 15:57:38.771180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Bc2v7EqmoB 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.83 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.83 != \0\.\0\0 ]] 00:28:06.977 00:28:06.977 real 0m3.345s 00:28:06.977 user 0m3.885s 
00:28:06.977 sys 0m0.341s 00:28:06.977 15:57:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:06.978 15:57:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.978 ************************************ 00:28:06.978 END TEST raid_write_error_test 00:28:06.978 ************************************ 00:28:07.235 15:57:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:28:07.235 15:57:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:28:07.235 15:57:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:07.235 15:57:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:07.235 15:57:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:07.235 ************************************ 00:28:07.235 START TEST raid_state_function_test 00:28:07.235 ************************************ 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:07.235 
15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:07.235 15:57:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71029 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:07.235 Process raid pid: 71029 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71029' 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71029 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71029 ']' 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:07.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:07.235 15:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.235 [2024-11-05 15:57:39.488595] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:07.235 [2024-11-05 15:57:39.488726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.493 [2024-11-05 15:57:39.655873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.493 [2024-11-05 15:57:39.740297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.493 [2024-11-05 15:57:39.851514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.493 [2024-11-05 15:57:39.851554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.057 [2024-11-05 15:57:40.326610] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:08.057 [2024-11-05 15:57:40.326656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:08.057 [2024-11-05 15:57:40.326665] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:08.057 [2024-11-05 15:57:40.326672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:08.057 [2024-11-05 15:57:40.326678] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:28:08.057 [2024-11-05 15:57:40.326685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:08.057 [2024-11-05 15:57:40.326690] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:08.057 [2024-11-05 15:57:40.326697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.057 "name": "Existed_Raid", 00:28:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.057 "strip_size_kb": 0, 00:28:08.057 "state": "configuring", 00:28:08.057 "raid_level": "raid1", 00:28:08.057 "superblock": false, 00:28:08.057 "num_base_bdevs": 4, 00:28:08.057 "num_base_bdevs_discovered": 0, 00:28:08.057 "num_base_bdevs_operational": 4, 00:28:08.057 "base_bdevs_list": [ 00:28:08.057 { 00:28:08.057 "name": "BaseBdev1", 00:28:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.057 "is_configured": false, 00:28:08.057 "data_offset": 0, 00:28:08.057 "data_size": 0 00:28:08.057 }, 00:28:08.057 { 00:28:08.057 "name": "BaseBdev2", 00:28:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.057 "is_configured": false, 00:28:08.057 "data_offset": 0, 00:28:08.057 "data_size": 0 00:28:08.057 }, 00:28:08.057 { 00:28:08.057 "name": "BaseBdev3", 00:28:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.057 "is_configured": false, 00:28:08.057 "data_offset": 0, 00:28:08.057 "data_size": 0 00:28:08.057 }, 00:28:08.057 { 00:28:08.057 "name": "BaseBdev4", 00:28:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.057 "is_configured": false, 00:28:08.057 "data_offset": 0, 00:28:08.057 "data_size": 0 00:28:08.057 } 00:28:08.057 ] 00:28:08.057 }' 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.057 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.316 [2024-11-05 15:57:40.618638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:08.316 [2024-11-05 15:57:40.618672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.316 [2024-11-05 15:57:40.626640] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:08.316 [2024-11-05 15:57:40.626677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:08.316 [2024-11-05 15:57:40.626684] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:08.316 [2024-11-05 15:57:40.626692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:08.316 [2024-11-05 15:57:40.626697] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:08.316 [2024-11-05 15:57:40.626704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:08.316 [2024-11-05 15:57:40.626708] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:08.316 [2024-11-05 15:57:40.626716] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.316 [2024-11-05 15:57:40.654696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:08.316 BaseBdev1 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.316 [ 00:28:08.316 { 00:28:08.316 "name": "BaseBdev1", 00:28:08.316 "aliases": [ 00:28:08.316 "b566da45-7c32-43a3-b032-b146826a4027" 00:28:08.316 ], 00:28:08.316 "product_name": "Malloc disk", 00:28:08.316 "block_size": 512, 00:28:08.316 "num_blocks": 65536, 00:28:08.316 "uuid": "b566da45-7c32-43a3-b032-b146826a4027", 00:28:08.316 "assigned_rate_limits": { 00:28:08.316 "rw_ios_per_sec": 0, 00:28:08.316 "rw_mbytes_per_sec": 0, 00:28:08.316 "r_mbytes_per_sec": 0, 00:28:08.316 "w_mbytes_per_sec": 0 00:28:08.316 }, 00:28:08.316 "claimed": true, 00:28:08.316 "claim_type": "exclusive_write", 00:28:08.316 "zoned": false, 00:28:08.316 "supported_io_types": { 00:28:08.316 "read": true, 00:28:08.316 "write": true, 00:28:08.316 "unmap": true, 00:28:08.316 "flush": true, 00:28:08.316 "reset": true, 00:28:08.316 "nvme_admin": false, 00:28:08.316 "nvme_io": false, 00:28:08.316 "nvme_io_md": false, 00:28:08.316 "write_zeroes": true, 00:28:08.316 "zcopy": true, 00:28:08.316 "get_zone_info": false, 00:28:08.316 "zone_management": false, 00:28:08.316 "zone_append": false, 00:28:08.316 "compare": false, 00:28:08.316 "compare_and_write": false, 00:28:08.316 "abort": true, 00:28:08.316 "seek_hole": false, 00:28:08.316 "seek_data": false, 00:28:08.316 "copy": true, 00:28:08.316 "nvme_iov_md": false 00:28:08.316 }, 00:28:08.316 "memory_domains": [ 00:28:08.316 { 00:28:08.316 "dma_device_id": "system", 00:28:08.316 "dma_device_type": 1 00:28:08.316 }, 00:28:08.316 { 00:28:08.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.316 "dma_device_type": 2 00:28:08.316 } 00:28:08.316 ], 00:28:08.316 "driver_specific": {} 00:28:08.316 } 00:28:08.316 ] 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:08.316 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.317 "name": "Existed_Raid", 
00:28:08.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.317 "strip_size_kb": 0, 00:28:08.317 "state": "configuring", 00:28:08.317 "raid_level": "raid1", 00:28:08.317 "superblock": false, 00:28:08.317 "num_base_bdevs": 4, 00:28:08.317 "num_base_bdevs_discovered": 1, 00:28:08.317 "num_base_bdevs_operational": 4, 00:28:08.317 "base_bdevs_list": [ 00:28:08.317 { 00:28:08.317 "name": "BaseBdev1", 00:28:08.317 "uuid": "b566da45-7c32-43a3-b032-b146826a4027", 00:28:08.317 "is_configured": true, 00:28:08.317 "data_offset": 0, 00:28:08.317 "data_size": 65536 00:28:08.317 }, 00:28:08.317 { 00:28:08.317 "name": "BaseBdev2", 00:28:08.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.317 "is_configured": false, 00:28:08.317 "data_offset": 0, 00:28:08.317 "data_size": 0 00:28:08.317 }, 00:28:08.317 { 00:28:08.317 "name": "BaseBdev3", 00:28:08.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.317 "is_configured": false, 00:28:08.317 "data_offset": 0, 00:28:08.317 "data_size": 0 00:28:08.317 }, 00:28:08.317 { 00:28:08.317 "name": "BaseBdev4", 00:28:08.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.317 "is_configured": false, 00:28:08.317 "data_offset": 0, 00:28:08.317 "data_size": 0 00:28:08.317 } 00:28:08.317 ] 00:28:08.317 }' 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.317 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.575 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:08.575 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.575 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.575 [2024-11-05 15:57:40.986794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:08.575 [2024-11-05 15:57:40.986839] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:08.575 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.575 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:08.575 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.575 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.833 [2024-11-05 15:57:40.994838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:08.833 [2024-11-05 15:57:40.996344] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:08.833 [2024-11-05 15:57:40.996381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:08.833 [2024-11-05 15:57:40.996388] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:08.833 [2024-11-05 15:57:40.996397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:08.833 [2024-11-05 15:57:40.996402] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:08.833 [2024-11-05 15:57:40.996408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:08.833 
15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.833 15:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.833 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.833 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.833 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.833 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.833 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.833 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.833 "name": "Existed_Raid", 00:28:08.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.833 "strip_size_kb": 0, 00:28:08.833 "state": "configuring", 00:28:08.833 "raid_level": "raid1", 00:28:08.833 "superblock": false, 00:28:08.833 "num_base_bdevs": 4, 00:28:08.833 "num_base_bdevs_discovered": 1, 
00:28:08.833 "num_base_bdevs_operational": 4, 00:28:08.833 "base_bdevs_list": [ 00:28:08.833 { 00:28:08.833 "name": "BaseBdev1", 00:28:08.833 "uuid": "b566da45-7c32-43a3-b032-b146826a4027", 00:28:08.833 "is_configured": true, 00:28:08.833 "data_offset": 0, 00:28:08.833 "data_size": 65536 00:28:08.833 }, 00:28:08.833 { 00:28:08.833 "name": "BaseBdev2", 00:28:08.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.833 "is_configured": false, 00:28:08.833 "data_offset": 0, 00:28:08.834 "data_size": 0 00:28:08.834 }, 00:28:08.834 { 00:28:08.834 "name": "BaseBdev3", 00:28:08.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.834 "is_configured": false, 00:28:08.834 "data_offset": 0, 00:28:08.834 "data_size": 0 00:28:08.834 }, 00:28:08.834 { 00:28:08.834 "name": "BaseBdev4", 00:28:08.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.834 "is_configured": false, 00:28:08.834 "data_offset": 0, 00:28:08.834 "data_size": 0 00:28:08.834 } 00:28:08.834 ] 00:28:08.834 }' 00:28:08.834 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.834 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.091 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:09.091 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.091 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.091 [2024-11-05 15:57:41.337674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:09.091 BaseBdev2 00:28:09.091 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.092 [ 00:28:09.092 { 00:28:09.092 "name": "BaseBdev2", 00:28:09.092 "aliases": [ 00:28:09.092 "fd3a300d-e453-4558-8226-027c84e6c1c9" 00:28:09.092 ], 00:28:09.092 "product_name": "Malloc disk", 00:28:09.092 "block_size": 512, 00:28:09.092 "num_blocks": 65536, 00:28:09.092 "uuid": "fd3a300d-e453-4558-8226-027c84e6c1c9", 00:28:09.092 "assigned_rate_limits": { 00:28:09.092 "rw_ios_per_sec": 0, 00:28:09.092 "rw_mbytes_per_sec": 0, 00:28:09.092 "r_mbytes_per_sec": 0, 00:28:09.092 "w_mbytes_per_sec": 0 00:28:09.092 }, 00:28:09.092 "claimed": true, 00:28:09.092 "claim_type": "exclusive_write", 00:28:09.092 "zoned": false, 00:28:09.092 "supported_io_types": { 00:28:09.092 "read": true, 
00:28:09.092 "write": true, 00:28:09.092 "unmap": true, 00:28:09.092 "flush": true, 00:28:09.092 "reset": true, 00:28:09.092 "nvme_admin": false, 00:28:09.092 "nvme_io": false, 00:28:09.092 "nvme_io_md": false, 00:28:09.092 "write_zeroes": true, 00:28:09.092 "zcopy": true, 00:28:09.092 "get_zone_info": false, 00:28:09.092 "zone_management": false, 00:28:09.092 "zone_append": false, 00:28:09.092 "compare": false, 00:28:09.092 "compare_and_write": false, 00:28:09.092 "abort": true, 00:28:09.092 "seek_hole": false, 00:28:09.092 "seek_data": false, 00:28:09.092 "copy": true, 00:28:09.092 "nvme_iov_md": false 00:28:09.092 }, 00:28:09.092 "memory_domains": [ 00:28:09.092 { 00:28:09.092 "dma_device_id": "system", 00:28:09.092 "dma_device_type": 1 00:28:09.092 }, 00:28:09.092 { 00:28:09.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.092 "dma_device_type": 2 00:28:09.092 } 00:28:09.092 ], 00:28:09.092 "driver_specific": {} 00:28:09.092 } 00:28:09.092 ] 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.092 "name": "Existed_Raid", 00:28:09.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.092 "strip_size_kb": 0, 00:28:09.092 "state": "configuring", 00:28:09.092 "raid_level": "raid1", 00:28:09.092 "superblock": false, 00:28:09.092 "num_base_bdevs": 4, 00:28:09.092 "num_base_bdevs_discovered": 2, 00:28:09.092 "num_base_bdevs_operational": 4, 00:28:09.092 "base_bdevs_list": [ 00:28:09.092 { 00:28:09.092 "name": "BaseBdev1", 00:28:09.092 "uuid": "b566da45-7c32-43a3-b032-b146826a4027", 00:28:09.092 "is_configured": true, 00:28:09.092 "data_offset": 0, 00:28:09.092 "data_size": 65536 00:28:09.092 }, 00:28:09.092 { 00:28:09.092 "name": "BaseBdev2", 00:28:09.092 "uuid": "fd3a300d-e453-4558-8226-027c84e6c1c9", 00:28:09.092 "is_configured": true, 
00:28:09.092 "data_offset": 0, 00:28:09.092 "data_size": 65536 00:28:09.092 }, 00:28:09.092 { 00:28:09.092 "name": "BaseBdev3", 00:28:09.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.092 "is_configured": false, 00:28:09.092 "data_offset": 0, 00:28:09.092 "data_size": 0 00:28:09.092 }, 00:28:09.092 { 00:28:09.092 "name": "BaseBdev4", 00:28:09.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.092 "is_configured": false, 00:28:09.092 "data_offset": 0, 00:28:09.092 "data_size": 0 00:28:09.092 } 00:28:09.092 ] 00:28:09.092 }' 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.092 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.350 [2024-11-05 15:57:41.750489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:09.350 BaseBdev3 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.350 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.609 [ 00:28:09.610 { 00:28:09.610 "name": "BaseBdev3", 00:28:09.610 "aliases": [ 00:28:09.610 "8be87bd7-b21c-45ca-be7a-752a1f18f591" 00:28:09.610 ], 00:28:09.610 "product_name": "Malloc disk", 00:28:09.610 "block_size": 512, 00:28:09.610 "num_blocks": 65536, 00:28:09.610 "uuid": "8be87bd7-b21c-45ca-be7a-752a1f18f591", 00:28:09.610 "assigned_rate_limits": { 00:28:09.610 "rw_ios_per_sec": 0, 00:28:09.610 "rw_mbytes_per_sec": 0, 00:28:09.610 "r_mbytes_per_sec": 0, 00:28:09.610 "w_mbytes_per_sec": 0 00:28:09.610 }, 00:28:09.610 "claimed": true, 00:28:09.610 "claim_type": "exclusive_write", 00:28:09.610 "zoned": false, 00:28:09.610 "supported_io_types": { 00:28:09.610 "read": true, 00:28:09.610 "write": true, 00:28:09.610 "unmap": true, 00:28:09.610 "flush": true, 00:28:09.610 "reset": true, 00:28:09.610 "nvme_admin": false, 00:28:09.610 "nvme_io": false, 00:28:09.610 "nvme_io_md": false, 00:28:09.610 "write_zeroes": true, 00:28:09.610 "zcopy": true, 00:28:09.610 "get_zone_info": false, 00:28:09.610 "zone_management": false, 00:28:09.610 "zone_append": false, 00:28:09.610 "compare": false, 00:28:09.610 "compare_and_write": false, 
00:28:09.610 "abort": true, 00:28:09.610 "seek_hole": false, 00:28:09.610 "seek_data": false, 00:28:09.610 "copy": true, 00:28:09.610 "nvme_iov_md": false 00:28:09.610 }, 00:28:09.610 "memory_domains": [ 00:28:09.610 { 00:28:09.610 "dma_device_id": "system", 00:28:09.610 "dma_device_type": 1 00:28:09.610 }, 00:28:09.610 { 00:28:09.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.610 "dma_device_type": 2 00:28:09.610 } 00:28:09.610 ], 00:28:09.610 "driver_specific": {} 00:28:09.610 } 00:28:09.610 ] 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.610 "name": "Existed_Raid", 00:28:09.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.610 "strip_size_kb": 0, 00:28:09.610 "state": "configuring", 00:28:09.610 "raid_level": "raid1", 00:28:09.610 "superblock": false, 00:28:09.610 "num_base_bdevs": 4, 00:28:09.610 "num_base_bdevs_discovered": 3, 00:28:09.610 "num_base_bdevs_operational": 4, 00:28:09.610 "base_bdevs_list": [ 00:28:09.610 { 00:28:09.610 "name": "BaseBdev1", 00:28:09.610 "uuid": "b566da45-7c32-43a3-b032-b146826a4027", 00:28:09.610 "is_configured": true, 00:28:09.610 "data_offset": 0, 00:28:09.610 "data_size": 65536 00:28:09.610 }, 00:28:09.610 { 00:28:09.610 "name": "BaseBdev2", 00:28:09.610 "uuid": "fd3a300d-e453-4558-8226-027c84e6c1c9", 00:28:09.610 "is_configured": true, 00:28:09.610 "data_offset": 0, 00:28:09.610 "data_size": 65536 00:28:09.610 }, 00:28:09.610 { 00:28:09.610 "name": "BaseBdev3", 00:28:09.610 "uuid": "8be87bd7-b21c-45ca-be7a-752a1f18f591", 00:28:09.610 "is_configured": true, 00:28:09.610 "data_offset": 0, 00:28:09.610 "data_size": 65536 00:28:09.610 }, 00:28:09.610 { 00:28:09.610 "name": "BaseBdev4", 00:28:09.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.610 "is_configured": false, 
00:28:09.610 "data_offset": 0, 00:28:09.610 "data_size": 0 00:28:09.610 } 00:28:09.610 ] 00:28:09.610 }' 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.610 15:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.868 [2024-11-05 15:57:42.121515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:09.868 [2024-11-05 15:57:42.121568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:09.868 [2024-11-05 15:57:42.121577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:09.868 [2024-11-05 15:57:42.121866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:09.868 [2024-11-05 15:57:42.122017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:09.868 [2024-11-05 15:57:42.122033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:09.868 [2024-11-05 15:57:42.122262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.868 BaseBdev4 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.868 [ 00:28:09.868 { 00:28:09.868 "name": "BaseBdev4", 00:28:09.868 "aliases": [ 00:28:09.868 "15c3bda5-5f4b-4cbc-b5db-30b9a2074c79" 00:28:09.868 ], 00:28:09.868 "product_name": "Malloc disk", 00:28:09.868 "block_size": 512, 00:28:09.868 "num_blocks": 65536, 00:28:09.868 "uuid": "15c3bda5-5f4b-4cbc-b5db-30b9a2074c79", 00:28:09.868 "assigned_rate_limits": { 00:28:09.868 "rw_ios_per_sec": 0, 00:28:09.868 "rw_mbytes_per_sec": 0, 00:28:09.868 "r_mbytes_per_sec": 0, 00:28:09.868 "w_mbytes_per_sec": 0 00:28:09.868 }, 00:28:09.868 "claimed": true, 00:28:09.868 "claim_type": "exclusive_write", 00:28:09.868 "zoned": false, 00:28:09.868 "supported_io_types": { 00:28:09.868 "read": true, 00:28:09.868 "write": true, 00:28:09.868 "unmap": true, 00:28:09.868 "flush": true, 00:28:09.868 "reset": true, 00:28:09.868 
"nvme_admin": false, 00:28:09.868 "nvme_io": false, 00:28:09.868 "nvme_io_md": false, 00:28:09.868 "write_zeroes": true, 00:28:09.868 "zcopy": true, 00:28:09.868 "get_zone_info": false, 00:28:09.868 "zone_management": false, 00:28:09.868 "zone_append": false, 00:28:09.868 "compare": false, 00:28:09.868 "compare_and_write": false, 00:28:09.868 "abort": true, 00:28:09.868 "seek_hole": false, 00:28:09.868 "seek_data": false, 00:28:09.868 "copy": true, 00:28:09.868 "nvme_iov_md": false 00:28:09.868 }, 00:28:09.868 "memory_domains": [ 00:28:09.868 { 00:28:09.868 "dma_device_id": "system", 00:28:09.868 "dma_device_type": 1 00:28:09.868 }, 00:28:09.868 { 00:28:09.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.868 "dma_device_type": 2 00:28:09.868 } 00:28:09.868 ], 00:28:09.868 "driver_specific": {} 00:28:09.868 } 00:28:09.868 ] 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:09.868 15:57:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.868 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.868 "name": "Existed_Raid", 00:28:09.868 "uuid": "3f60fc0b-4869-4f31-9066-a1b643656f48", 00:28:09.868 "strip_size_kb": 0, 00:28:09.868 "state": "online", 00:28:09.868 "raid_level": "raid1", 00:28:09.868 "superblock": false, 00:28:09.868 "num_base_bdevs": 4, 00:28:09.868 "num_base_bdevs_discovered": 4, 00:28:09.868 "num_base_bdevs_operational": 4, 00:28:09.868 "base_bdevs_list": [ 00:28:09.869 { 00:28:09.869 "name": "BaseBdev1", 00:28:09.869 "uuid": "b566da45-7c32-43a3-b032-b146826a4027", 00:28:09.869 "is_configured": true, 00:28:09.869 "data_offset": 0, 00:28:09.869 "data_size": 65536 00:28:09.869 }, 00:28:09.869 { 00:28:09.869 "name": "BaseBdev2", 00:28:09.869 "uuid": "fd3a300d-e453-4558-8226-027c84e6c1c9", 00:28:09.869 "is_configured": true, 00:28:09.869 "data_offset": 0, 00:28:09.869 "data_size": 65536 00:28:09.869 }, 00:28:09.869 { 00:28:09.869 "name": "BaseBdev3", 00:28:09.869 "uuid": 
"8be87bd7-b21c-45ca-be7a-752a1f18f591", 00:28:09.869 "is_configured": true, 00:28:09.869 "data_offset": 0, 00:28:09.869 "data_size": 65536 00:28:09.869 }, 00:28:09.869 { 00:28:09.869 "name": "BaseBdev4", 00:28:09.869 "uuid": "15c3bda5-5f4b-4cbc-b5db-30b9a2074c79", 00:28:09.869 "is_configured": true, 00:28:09.869 "data_offset": 0, 00:28:09.869 "data_size": 65536 00:28:09.869 } 00:28:09.869 ] 00:28:09.869 }' 00:28:09.869 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.869 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.126 [2024-11-05 15:57:42.482041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:10.126 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.126 15:57:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:10.126 "name": "Existed_Raid", 00:28:10.126 "aliases": [ 00:28:10.126 "3f60fc0b-4869-4f31-9066-a1b643656f48" 00:28:10.126 ], 00:28:10.126 "product_name": "Raid Volume", 00:28:10.126 "block_size": 512, 00:28:10.126 "num_blocks": 65536, 00:28:10.126 "uuid": "3f60fc0b-4869-4f31-9066-a1b643656f48", 00:28:10.126 "assigned_rate_limits": { 00:28:10.126 "rw_ios_per_sec": 0, 00:28:10.126 "rw_mbytes_per_sec": 0, 00:28:10.126 "r_mbytes_per_sec": 0, 00:28:10.126 "w_mbytes_per_sec": 0 00:28:10.126 }, 00:28:10.126 "claimed": false, 00:28:10.126 "zoned": false, 00:28:10.127 "supported_io_types": { 00:28:10.127 "read": true, 00:28:10.127 "write": true, 00:28:10.127 "unmap": false, 00:28:10.127 "flush": false, 00:28:10.127 "reset": true, 00:28:10.127 "nvme_admin": false, 00:28:10.127 "nvme_io": false, 00:28:10.127 "nvme_io_md": false, 00:28:10.127 "write_zeroes": true, 00:28:10.127 "zcopy": false, 00:28:10.127 "get_zone_info": false, 00:28:10.127 "zone_management": false, 00:28:10.127 "zone_append": false, 00:28:10.127 "compare": false, 00:28:10.127 "compare_and_write": false, 00:28:10.127 "abort": false, 00:28:10.127 "seek_hole": false, 00:28:10.127 "seek_data": false, 00:28:10.127 "copy": false, 00:28:10.127 "nvme_iov_md": false 00:28:10.127 }, 00:28:10.127 "memory_domains": [ 00:28:10.127 { 00:28:10.127 "dma_device_id": "system", 00:28:10.127 "dma_device_type": 1 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.127 "dma_device_type": 2 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "dma_device_id": "system", 00:28:10.127 "dma_device_type": 1 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.127 "dma_device_type": 2 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "dma_device_id": "system", 00:28:10.127 "dma_device_type": 1 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:28:10.127 "dma_device_type": 2 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "dma_device_id": "system", 00:28:10.127 "dma_device_type": 1 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.127 "dma_device_type": 2 00:28:10.127 } 00:28:10.127 ], 00:28:10.127 "driver_specific": { 00:28:10.127 "raid": { 00:28:10.127 "uuid": "3f60fc0b-4869-4f31-9066-a1b643656f48", 00:28:10.127 "strip_size_kb": 0, 00:28:10.127 "state": "online", 00:28:10.127 "raid_level": "raid1", 00:28:10.127 "superblock": false, 00:28:10.127 "num_base_bdevs": 4, 00:28:10.127 "num_base_bdevs_discovered": 4, 00:28:10.127 "num_base_bdevs_operational": 4, 00:28:10.127 "base_bdevs_list": [ 00:28:10.127 { 00:28:10.127 "name": "BaseBdev1", 00:28:10.127 "uuid": "b566da45-7c32-43a3-b032-b146826a4027", 00:28:10.127 "is_configured": true, 00:28:10.127 "data_offset": 0, 00:28:10.127 "data_size": 65536 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "name": "BaseBdev2", 00:28:10.127 "uuid": "fd3a300d-e453-4558-8226-027c84e6c1c9", 00:28:10.127 "is_configured": true, 00:28:10.127 "data_offset": 0, 00:28:10.127 "data_size": 65536 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "name": "BaseBdev3", 00:28:10.127 "uuid": "8be87bd7-b21c-45ca-be7a-752a1f18f591", 00:28:10.127 "is_configured": true, 00:28:10.127 "data_offset": 0, 00:28:10.127 "data_size": 65536 00:28:10.127 }, 00:28:10.127 { 00:28:10.127 "name": "BaseBdev4", 00:28:10.127 "uuid": "15c3bda5-5f4b-4cbc-b5db-30b9a2074c79", 00:28:10.127 "is_configured": true, 00:28:10.127 "data_offset": 0, 00:28:10.127 "data_size": 65536 00:28:10.127 } 00:28:10.127 ] 00:28:10.127 } 00:28:10.127 } 00:28:10.127 }' 00:28:10.127 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:10.407 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:10.408 BaseBdev2 00:28:10.408 BaseBdev3 
00:28:10.408 BaseBdev4' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.408 15:57:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:10.408 15:57:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.408 [2024-11-05 15:57:42.717777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:10.408 
15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.408 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.667 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:10.667 "name": "Existed_Raid", 00:28:10.667 "uuid": "3f60fc0b-4869-4f31-9066-a1b643656f48", 00:28:10.667 "strip_size_kb": 0, 00:28:10.667 "state": "online", 00:28:10.667 "raid_level": "raid1", 00:28:10.667 "superblock": false, 00:28:10.667 "num_base_bdevs": 4, 00:28:10.667 "num_base_bdevs_discovered": 3, 00:28:10.667 "num_base_bdevs_operational": 3, 00:28:10.667 "base_bdevs_list": [ 00:28:10.667 { 00:28:10.667 "name": null, 00:28:10.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.667 "is_configured": false, 00:28:10.667 "data_offset": 0, 00:28:10.667 "data_size": 65536 00:28:10.667 }, 00:28:10.667 { 00:28:10.667 "name": "BaseBdev2", 00:28:10.667 "uuid": "fd3a300d-e453-4558-8226-027c84e6c1c9", 00:28:10.667 "is_configured": true, 00:28:10.667 "data_offset": 0, 00:28:10.667 "data_size": 65536 00:28:10.667 }, 00:28:10.667 { 00:28:10.667 "name": "BaseBdev3", 00:28:10.667 "uuid": "8be87bd7-b21c-45ca-be7a-752a1f18f591", 00:28:10.667 "is_configured": true, 00:28:10.667 "data_offset": 0, 
00:28:10.667 "data_size": 65536 00:28:10.667 }, 00:28:10.667 { 00:28:10.667 "name": "BaseBdev4", 00:28:10.667 "uuid": "15c3bda5-5f4b-4cbc-b5db-30b9a2074c79", 00:28:10.667 "is_configured": true, 00:28:10.667 "data_offset": 0, 00:28:10.667 "data_size": 65536 00:28:10.667 } 00:28:10.667 ] 00:28:10.667 }' 00:28:10.667 15:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.667 15:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.667 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:10.667 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:10.667 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.667 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:10.667 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.667 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.667 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.927 [2024-11-05 15:57:43.092673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.927 [2024-11-05 15:57:43.183498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.927 [2024-11-05 15:57:43.282583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:10.927 [2024-11-05 15:57:43.282773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:10.927 [2024-11-05 15:57:43.341234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:10.927 [2024-11-05 15:57:43.341282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:10.927 [2024-11-05 15:57:43.341293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:10.927 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 BaseBdev2 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 [ 00:28:11.201 { 00:28:11.201 "name": "BaseBdev2", 00:28:11.201 "aliases": [ 00:28:11.201 "d11631ae-3a8f-41f8-9c1e-0c08216604c5" 00:28:11.201 ], 00:28:11.201 "product_name": "Malloc disk", 00:28:11.201 "block_size": 512, 00:28:11.201 "num_blocks": 65536, 00:28:11.201 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:11.201 "assigned_rate_limits": { 00:28:11.201 "rw_ios_per_sec": 0, 00:28:11.201 "rw_mbytes_per_sec": 0, 00:28:11.201 "r_mbytes_per_sec": 0, 00:28:11.201 "w_mbytes_per_sec": 0 00:28:11.201 }, 00:28:11.201 "claimed": false, 00:28:11.201 "zoned": false, 00:28:11.201 "supported_io_types": { 00:28:11.201 "read": true, 00:28:11.201 "write": true, 00:28:11.201 "unmap": true, 00:28:11.201 "flush": true, 00:28:11.201 "reset": true, 00:28:11.201 "nvme_admin": false, 00:28:11.201 "nvme_io": false, 00:28:11.201 "nvme_io_md": false, 00:28:11.201 "write_zeroes": true, 00:28:11.201 "zcopy": true, 00:28:11.201 "get_zone_info": false, 00:28:11.201 "zone_management": false, 00:28:11.201 "zone_append": false, 
00:28:11.201 "compare": false, 00:28:11.201 "compare_and_write": false, 00:28:11.201 "abort": true, 00:28:11.201 "seek_hole": false, 00:28:11.201 "seek_data": false, 00:28:11.201 "copy": true, 00:28:11.201 "nvme_iov_md": false 00:28:11.201 }, 00:28:11.201 "memory_domains": [ 00:28:11.201 { 00:28:11.201 "dma_device_id": "system", 00:28:11.201 "dma_device_type": 1 00:28:11.201 }, 00:28:11.201 { 00:28:11.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.201 "dma_device_type": 2 00:28:11.201 } 00:28:11.201 ], 00:28:11.201 "driver_specific": {} 00:28:11.201 } 00:28:11.201 ] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 BaseBdev3 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 [ 00:28:11.201 { 00:28:11.201 "name": "BaseBdev3", 00:28:11.201 "aliases": [ 00:28:11.201 "1121336d-00e6-4f49-b74a-a7f3d361d6b3" 00:28:11.201 ], 00:28:11.201 "product_name": "Malloc disk", 00:28:11.201 "block_size": 512, 00:28:11.201 "num_blocks": 65536, 00:28:11.201 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:11.201 "assigned_rate_limits": { 00:28:11.201 "rw_ios_per_sec": 0, 00:28:11.201 "rw_mbytes_per_sec": 0, 00:28:11.201 "r_mbytes_per_sec": 0, 00:28:11.201 "w_mbytes_per_sec": 0 00:28:11.201 }, 00:28:11.201 "claimed": false, 00:28:11.201 "zoned": false, 00:28:11.201 "supported_io_types": { 00:28:11.201 "read": true, 00:28:11.201 "write": true, 00:28:11.201 "unmap": true, 00:28:11.201 "flush": true, 00:28:11.201 "reset": true, 00:28:11.201 "nvme_admin": false, 00:28:11.201 "nvme_io": false, 00:28:11.201 "nvme_io_md": false, 00:28:11.201 "write_zeroes": true, 00:28:11.201 "zcopy": true, 00:28:11.201 "get_zone_info": false, 00:28:11.201 "zone_management": false, 00:28:11.201 "zone_append": false, 
00:28:11.201 "compare": false, 00:28:11.201 "compare_and_write": false, 00:28:11.201 "abort": true, 00:28:11.201 "seek_hole": false, 00:28:11.201 "seek_data": false, 00:28:11.201 "copy": true, 00:28:11.201 "nvme_iov_md": false 00:28:11.201 }, 00:28:11.201 "memory_domains": [ 00:28:11.201 { 00:28:11.201 "dma_device_id": "system", 00:28:11.201 "dma_device_type": 1 00:28:11.201 }, 00:28:11.201 { 00:28:11.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.201 "dma_device_type": 2 00:28:11.201 } 00:28:11.201 ], 00:28:11.201 "driver_specific": {} 00:28:11.201 } 00:28:11.201 ] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 BaseBdev4 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.201 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.201 [ 00:28:11.201 { 00:28:11.201 "name": "BaseBdev4", 00:28:11.201 "aliases": [ 00:28:11.201 "da08c4d4-3b08-42c3-9438-57a87d16cae8" 00:28:11.201 ], 00:28:11.202 "product_name": "Malloc disk", 00:28:11.202 "block_size": 512, 00:28:11.202 "num_blocks": 65536, 00:28:11.202 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:11.202 "assigned_rate_limits": { 00:28:11.202 "rw_ios_per_sec": 0, 00:28:11.202 "rw_mbytes_per_sec": 0, 00:28:11.202 "r_mbytes_per_sec": 0, 00:28:11.202 "w_mbytes_per_sec": 0 00:28:11.202 }, 00:28:11.202 "claimed": false, 00:28:11.202 "zoned": false, 00:28:11.202 "supported_io_types": { 00:28:11.202 "read": true, 00:28:11.202 "write": true, 00:28:11.202 "unmap": true, 00:28:11.202 "flush": true, 00:28:11.202 "reset": true, 00:28:11.202 "nvme_admin": false, 00:28:11.202 "nvme_io": false, 00:28:11.202 "nvme_io_md": false, 00:28:11.202 "write_zeroes": true, 00:28:11.202 "zcopy": true, 00:28:11.202 "get_zone_info": false, 00:28:11.202 "zone_management": false, 00:28:11.202 "zone_append": false, 
00:28:11.202 "compare": false, 00:28:11.202 "compare_and_write": false, 00:28:11.202 "abort": true, 00:28:11.202 "seek_hole": false, 00:28:11.202 "seek_data": false, 00:28:11.202 "copy": true, 00:28:11.202 "nvme_iov_md": false 00:28:11.202 }, 00:28:11.202 "memory_domains": [ 00:28:11.202 { 00:28:11.202 "dma_device_id": "system", 00:28:11.202 "dma_device_type": 1 00:28:11.202 }, 00:28:11.202 { 00:28:11.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.202 "dma_device_type": 2 00:28:11.202 } 00:28:11.202 ], 00:28:11.202 "driver_specific": {} 00:28:11.202 } 00:28:11.202 ] 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.202 [2024-11-05 15:57:43.537647] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:11.202 [2024-11-05 15:57:43.537792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:11.202 [2024-11-05 15:57:43.537880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:11.202 [2024-11-05 15:57:43.539746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:11.202 [2024-11-05 15:57:43.539883] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:28:11.202 "name": "Existed_Raid", 00:28:11.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.202 "strip_size_kb": 0, 00:28:11.202 "state": "configuring", 00:28:11.202 "raid_level": "raid1", 00:28:11.202 "superblock": false, 00:28:11.202 "num_base_bdevs": 4, 00:28:11.202 "num_base_bdevs_discovered": 3, 00:28:11.202 "num_base_bdevs_operational": 4, 00:28:11.202 "base_bdevs_list": [ 00:28:11.202 { 00:28:11.202 "name": "BaseBdev1", 00:28:11.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.202 "is_configured": false, 00:28:11.202 "data_offset": 0, 00:28:11.202 "data_size": 0 00:28:11.202 }, 00:28:11.202 { 00:28:11.202 "name": "BaseBdev2", 00:28:11.202 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:11.202 "is_configured": true, 00:28:11.202 "data_offset": 0, 00:28:11.202 "data_size": 65536 00:28:11.202 }, 00:28:11.202 { 00:28:11.202 "name": "BaseBdev3", 00:28:11.202 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:11.202 "is_configured": true, 00:28:11.202 "data_offset": 0, 00:28:11.202 "data_size": 65536 00:28:11.202 }, 00:28:11.202 { 00:28:11.202 "name": "BaseBdev4", 00:28:11.202 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:11.202 "is_configured": true, 00:28:11.202 "data_offset": 0, 00:28:11.202 "data_size": 65536 00:28:11.202 } 00:28:11.202 ] 00:28:11.202 }' 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:11.202 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.458 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:11.458 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.458 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.716 [2024-11-05 15:57:43.877746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:11.716 "name": "Existed_Raid", 00:28:11.716 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:11.716 "strip_size_kb": 0, 00:28:11.716 "state": "configuring", 00:28:11.716 "raid_level": "raid1", 00:28:11.716 "superblock": false, 00:28:11.716 "num_base_bdevs": 4, 00:28:11.716 "num_base_bdevs_discovered": 2, 00:28:11.716 "num_base_bdevs_operational": 4, 00:28:11.716 "base_bdevs_list": [ 00:28:11.716 { 00:28:11.716 "name": "BaseBdev1", 00:28:11.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.716 "is_configured": false, 00:28:11.716 "data_offset": 0, 00:28:11.716 "data_size": 0 00:28:11.716 }, 00:28:11.716 { 00:28:11.716 "name": null, 00:28:11.716 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:11.716 "is_configured": false, 00:28:11.716 "data_offset": 0, 00:28:11.716 "data_size": 65536 00:28:11.716 }, 00:28:11.716 { 00:28:11.716 "name": "BaseBdev3", 00:28:11.716 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:11.716 "is_configured": true, 00:28:11.716 "data_offset": 0, 00:28:11.716 "data_size": 65536 00:28:11.716 }, 00:28:11.716 { 00:28:11.716 "name": "BaseBdev4", 00:28:11.716 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:11.716 "is_configured": true, 00:28:11.716 "data_offset": 0, 00:28:11.716 "data_size": 65536 00:28:11.716 } 00:28:11.716 ] 00:28:11.716 }' 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:11.716 15:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.975 [2024-11-05 15:57:44.212600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:11.975 BaseBdev1 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.975 [ 00:28:11.975 { 00:28:11.975 "name": "BaseBdev1", 00:28:11.975 "aliases": [ 00:28:11.975 "a2eec878-7890-4dd9-aa05-967c9ce2c4cb" 00:28:11.975 ], 00:28:11.975 "product_name": "Malloc disk", 00:28:11.975 "block_size": 512, 00:28:11.975 "num_blocks": 65536, 00:28:11.975 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:11.975 "assigned_rate_limits": { 00:28:11.975 "rw_ios_per_sec": 0, 00:28:11.975 "rw_mbytes_per_sec": 0, 00:28:11.975 "r_mbytes_per_sec": 0, 00:28:11.975 "w_mbytes_per_sec": 0 00:28:11.975 }, 00:28:11.975 "claimed": true, 00:28:11.975 "claim_type": "exclusive_write", 00:28:11.975 "zoned": false, 00:28:11.975 "supported_io_types": { 00:28:11.975 "read": true, 00:28:11.975 "write": true, 00:28:11.975 "unmap": true, 00:28:11.975 "flush": true, 00:28:11.975 "reset": true, 00:28:11.975 "nvme_admin": false, 00:28:11.975 "nvme_io": false, 00:28:11.975 "nvme_io_md": false, 00:28:11.975 "write_zeroes": true, 00:28:11.975 "zcopy": true, 00:28:11.975 "get_zone_info": false, 00:28:11.975 "zone_management": false, 00:28:11.975 "zone_append": false, 00:28:11.975 "compare": false, 00:28:11.975 "compare_and_write": false, 00:28:11.975 "abort": true, 00:28:11.975 "seek_hole": false, 00:28:11.975 "seek_data": false, 00:28:11.975 "copy": true, 00:28:11.975 "nvme_iov_md": false 00:28:11.975 }, 00:28:11.975 "memory_domains": [ 00:28:11.975 { 00:28:11.975 "dma_device_id": "system", 00:28:11.975 "dma_device_type": 1 00:28:11.975 }, 00:28:11.975 { 00:28:11.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.975 "dma_device_type": 2 00:28:11.975 } 00:28:11.975 ], 00:28:11.975 "driver_specific": {} 00:28:11.975 } 00:28:11.975 ] 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:11.975 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:11.976 "name": "Existed_Raid", 00:28:11.976 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:11.976 "strip_size_kb": 0, 00:28:11.976 "state": "configuring", 00:28:11.976 "raid_level": "raid1", 00:28:11.976 "superblock": false, 00:28:11.976 "num_base_bdevs": 4, 00:28:11.976 "num_base_bdevs_discovered": 3, 00:28:11.976 "num_base_bdevs_operational": 4, 00:28:11.976 "base_bdevs_list": [ 00:28:11.976 { 00:28:11.976 "name": "BaseBdev1", 00:28:11.976 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:11.976 "is_configured": true, 00:28:11.976 "data_offset": 0, 00:28:11.976 "data_size": 65536 00:28:11.976 }, 00:28:11.976 { 00:28:11.976 "name": null, 00:28:11.976 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:11.976 "is_configured": false, 00:28:11.976 "data_offset": 0, 00:28:11.976 "data_size": 65536 00:28:11.976 }, 00:28:11.976 { 00:28:11.976 "name": "BaseBdev3", 00:28:11.976 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:11.976 "is_configured": true, 00:28:11.976 "data_offset": 0, 00:28:11.976 "data_size": 65536 00:28:11.976 }, 00:28:11.976 { 00:28:11.976 "name": "BaseBdev4", 00:28:11.976 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:11.976 "is_configured": true, 00:28:11.976 "data_offset": 0, 00:28:11.976 "data_size": 65536 00:28:11.976 } 00:28:11.976 ] 00:28:11.976 }' 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:11.976 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.234 [2024-11-05 15:57:44.576757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.234 "name": "Existed_Raid", 00:28:12.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.234 "strip_size_kb": 0, 00:28:12.234 "state": "configuring", 00:28:12.234 "raid_level": "raid1", 00:28:12.234 "superblock": false, 00:28:12.234 "num_base_bdevs": 4, 00:28:12.234 "num_base_bdevs_discovered": 2, 00:28:12.234 "num_base_bdevs_operational": 4, 00:28:12.234 "base_bdevs_list": [ 00:28:12.234 { 00:28:12.234 "name": "BaseBdev1", 00:28:12.234 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:12.234 "is_configured": true, 00:28:12.234 "data_offset": 0, 00:28:12.234 "data_size": 65536 00:28:12.234 }, 00:28:12.234 { 00:28:12.234 "name": null, 00:28:12.234 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:12.234 "is_configured": false, 00:28:12.234 "data_offset": 0, 00:28:12.234 "data_size": 65536 00:28:12.234 }, 00:28:12.234 { 00:28:12.234 "name": null, 00:28:12.234 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:12.234 "is_configured": false, 00:28:12.234 "data_offset": 0, 00:28:12.234 "data_size": 65536 00:28:12.234 }, 00:28:12.234 { 00:28:12.234 "name": "BaseBdev4", 00:28:12.234 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:12.234 "is_configured": true, 00:28:12.234 "data_offset": 0, 00:28:12.234 "data_size": 65536 00:28:12.234 } 00:28:12.234 ] 00:28:12.234 }' 00:28:12.234 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.234 15:57:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.492 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.492 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:12.492 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.492 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.750 [2024-11-05 15:57:44.928832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:12.750 15:57:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:12.750 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.751 "name": "Existed_Raid", 00:28:12.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.751 "strip_size_kb": 0, 00:28:12.751 "state": "configuring", 00:28:12.751 "raid_level": "raid1", 00:28:12.751 "superblock": false, 00:28:12.751 "num_base_bdevs": 4, 00:28:12.751 "num_base_bdevs_discovered": 3, 00:28:12.751 "num_base_bdevs_operational": 4, 00:28:12.751 "base_bdevs_list": [ 00:28:12.751 { 00:28:12.751 "name": "BaseBdev1", 00:28:12.751 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:12.751 "is_configured": true, 00:28:12.751 "data_offset": 0, 00:28:12.751 "data_size": 65536 00:28:12.751 }, 00:28:12.751 { 00:28:12.751 "name": null, 00:28:12.751 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:12.751 "is_configured": false, 00:28:12.751 "data_offset": 
0, 00:28:12.751 "data_size": 65536 00:28:12.751 }, 00:28:12.751 { 00:28:12.751 "name": "BaseBdev3", 00:28:12.751 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:12.751 "is_configured": true, 00:28:12.751 "data_offset": 0, 00:28:12.751 "data_size": 65536 00:28:12.751 }, 00:28:12.751 { 00:28:12.751 "name": "BaseBdev4", 00:28:12.751 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:12.751 "is_configured": true, 00:28:12.751 "data_offset": 0, 00:28:12.751 "data_size": 65536 00:28:12.751 } 00:28:12.751 ] 00:28:12.751 }' 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.751 15:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 [2024-11-05 15:57:45.304944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.009 15:57:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.009 "name": "Existed_Raid", 00:28:13.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.009 "strip_size_kb": 0, 00:28:13.009 "state": "configuring", 00:28:13.009 
"raid_level": "raid1", 00:28:13.009 "superblock": false, 00:28:13.009 "num_base_bdevs": 4, 00:28:13.009 "num_base_bdevs_discovered": 2, 00:28:13.009 "num_base_bdevs_operational": 4, 00:28:13.009 "base_bdevs_list": [ 00:28:13.009 { 00:28:13.009 "name": null, 00:28:13.009 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:13.009 "is_configured": false, 00:28:13.009 "data_offset": 0, 00:28:13.009 "data_size": 65536 00:28:13.009 }, 00:28:13.009 { 00:28:13.009 "name": null, 00:28:13.009 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:13.009 "is_configured": false, 00:28:13.009 "data_offset": 0, 00:28:13.009 "data_size": 65536 00:28:13.009 }, 00:28:13.009 { 00:28:13.009 "name": "BaseBdev3", 00:28:13.009 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:13.009 "is_configured": true, 00:28:13.009 "data_offset": 0, 00:28:13.009 "data_size": 65536 00:28:13.009 }, 00:28:13.009 { 00:28:13.009 "name": "BaseBdev4", 00:28:13.009 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:13.009 "is_configured": true, 00:28:13.009 "data_offset": 0, 00:28:13.009 "data_size": 65536 00:28:13.009 } 00:28:13.009 ] 00:28:13.009 }' 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.009 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:13.575 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.575 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.575 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.576 [2024-11-05 15:57:45.759942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.576 "name": "Existed_Raid", 00:28:13.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.576 "strip_size_kb": 0, 00:28:13.576 "state": "configuring", 00:28:13.576 "raid_level": "raid1", 00:28:13.576 "superblock": false, 00:28:13.576 "num_base_bdevs": 4, 00:28:13.576 "num_base_bdevs_discovered": 3, 00:28:13.576 "num_base_bdevs_operational": 4, 00:28:13.576 "base_bdevs_list": [ 00:28:13.576 { 00:28:13.576 "name": null, 00:28:13.576 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:13.576 "is_configured": false, 00:28:13.576 "data_offset": 0, 00:28:13.576 "data_size": 65536 00:28:13.576 }, 00:28:13.576 { 00:28:13.576 "name": "BaseBdev2", 00:28:13.576 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:13.576 "is_configured": true, 00:28:13.576 "data_offset": 0, 00:28:13.576 "data_size": 65536 00:28:13.576 }, 00:28:13.576 { 00:28:13.576 "name": "BaseBdev3", 00:28:13.576 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:13.576 "is_configured": true, 00:28:13.576 "data_offset": 0, 00:28:13.576 "data_size": 65536 00:28:13.576 }, 00:28:13.576 { 00:28:13.576 "name": "BaseBdev4", 00:28:13.576 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:13.576 "is_configured": true, 00:28:13.576 "data_offset": 0, 00:28:13.576 "data_size": 65536 00:28:13.576 } 00:28:13.576 ] 00:28:13.576 }' 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.576 15:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.833 15:57:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a2eec878-7890-4dd9-aa05-967c9ce2c4cb 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.833 [2024-11-05 15:57:46.158068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:13.833 [2024-11-05 15:57:46.158107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:13.833 [2024-11-05 15:57:46.158116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:13.833 
[2024-11-05 15:57:46.158376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:13.833 [2024-11-05 15:57:46.158531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:13.833 [2024-11-05 15:57:46.158565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:13.833 [2024-11-05 15:57:46.158780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.833 NewBaseBdev 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.833 [ 00:28:13.833 { 00:28:13.833 "name": "NewBaseBdev", 00:28:13.833 "aliases": [ 00:28:13.833 "a2eec878-7890-4dd9-aa05-967c9ce2c4cb" 00:28:13.833 ], 00:28:13.833 "product_name": "Malloc disk", 00:28:13.833 "block_size": 512, 00:28:13.833 "num_blocks": 65536, 00:28:13.833 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:13.833 "assigned_rate_limits": { 00:28:13.833 "rw_ios_per_sec": 0, 00:28:13.833 "rw_mbytes_per_sec": 0, 00:28:13.833 "r_mbytes_per_sec": 0, 00:28:13.833 "w_mbytes_per_sec": 0 00:28:13.833 }, 00:28:13.833 "claimed": true, 00:28:13.833 "claim_type": "exclusive_write", 00:28:13.833 "zoned": false, 00:28:13.833 "supported_io_types": { 00:28:13.833 "read": true, 00:28:13.833 "write": true, 00:28:13.833 "unmap": true, 00:28:13.833 "flush": true, 00:28:13.833 "reset": true, 00:28:13.833 "nvme_admin": false, 00:28:13.833 "nvme_io": false, 00:28:13.833 "nvme_io_md": false, 00:28:13.833 "write_zeroes": true, 00:28:13.833 "zcopy": true, 00:28:13.833 "get_zone_info": false, 00:28:13.833 "zone_management": false, 00:28:13.833 "zone_append": false, 00:28:13.833 "compare": false, 00:28:13.833 "compare_and_write": false, 00:28:13.833 "abort": true, 00:28:13.833 "seek_hole": false, 00:28:13.833 "seek_data": false, 00:28:13.833 "copy": true, 00:28:13.833 "nvme_iov_md": false 00:28:13.833 }, 00:28:13.833 "memory_domains": [ 00:28:13.833 { 00:28:13.833 "dma_device_id": "system", 00:28:13.833 "dma_device_type": 1 00:28:13.833 }, 00:28:13.833 { 00:28:13.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.833 "dma_device_type": 2 00:28:13.833 } 00:28:13.833 ], 00:28:13.833 "driver_specific": {} 00:28:13.833 } 00:28:13.833 ] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.833 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.833 "name": "Existed_Raid", 00:28:13.833 "uuid": "0c952c6e-b491-47bb-99c9-bbc941600034", 00:28:13.833 "strip_size_kb": 0, 00:28:13.833 "state": "online", 00:28:13.833 
"raid_level": "raid1", 00:28:13.833 "superblock": false, 00:28:13.833 "num_base_bdevs": 4, 00:28:13.833 "num_base_bdevs_discovered": 4, 00:28:13.833 "num_base_bdevs_operational": 4, 00:28:13.833 "base_bdevs_list": [ 00:28:13.833 { 00:28:13.833 "name": "NewBaseBdev", 00:28:13.833 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:13.833 "is_configured": true, 00:28:13.833 "data_offset": 0, 00:28:13.833 "data_size": 65536 00:28:13.833 }, 00:28:13.833 { 00:28:13.833 "name": "BaseBdev2", 00:28:13.833 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:13.833 "is_configured": true, 00:28:13.833 "data_offset": 0, 00:28:13.833 "data_size": 65536 00:28:13.833 }, 00:28:13.833 { 00:28:13.833 "name": "BaseBdev3", 00:28:13.833 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:13.833 "is_configured": true, 00:28:13.834 "data_offset": 0, 00:28:13.834 "data_size": 65536 00:28:13.834 }, 00:28:13.834 { 00:28:13.834 "name": "BaseBdev4", 00:28:13.834 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:13.834 "is_configured": true, 00:28:13.834 "data_offset": 0, 00:28:13.834 "data_size": 65536 00:28:13.834 } 00:28:13.834 ] 00:28:13.834 }' 00:28:13.834 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.834 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.090 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:14.090 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.091 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.091 [2024-11-05 15:57:46.498571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:14.349 "name": "Existed_Raid", 00:28:14.349 "aliases": [ 00:28:14.349 "0c952c6e-b491-47bb-99c9-bbc941600034" 00:28:14.349 ], 00:28:14.349 "product_name": "Raid Volume", 00:28:14.349 "block_size": 512, 00:28:14.349 "num_blocks": 65536, 00:28:14.349 "uuid": "0c952c6e-b491-47bb-99c9-bbc941600034", 00:28:14.349 "assigned_rate_limits": { 00:28:14.349 "rw_ios_per_sec": 0, 00:28:14.349 "rw_mbytes_per_sec": 0, 00:28:14.349 "r_mbytes_per_sec": 0, 00:28:14.349 "w_mbytes_per_sec": 0 00:28:14.349 }, 00:28:14.349 "claimed": false, 00:28:14.349 "zoned": false, 00:28:14.349 "supported_io_types": { 00:28:14.349 "read": true, 00:28:14.349 "write": true, 00:28:14.349 "unmap": false, 00:28:14.349 "flush": false, 00:28:14.349 "reset": true, 00:28:14.349 "nvme_admin": false, 00:28:14.349 "nvme_io": false, 00:28:14.349 "nvme_io_md": false, 00:28:14.349 "write_zeroes": true, 00:28:14.349 "zcopy": false, 00:28:14.349 "get_zone_info": false, 00:28:14.349 "zone_management": false, 00:28:14.349 "zone_append": false, 00:28:14.349 "compare": false, 00:28:14.349 "compare_and_write": false, 00:28:14.349 "abort": false, 00:28:14.349 "seek_hole": false, 00:28:14.349 "seek_data": false, 00:28:14.349 
"copy": false, 00:28:14.349 "nvme_iov_md": false 00:28:14.349 }, 00:28:14.349 "memory_domains": [ 00:28:14.349 { 00:28:14.349 "dma_device_id": "system", 00:28:14.349 "dma_device_type": 1 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.349 "dma_device_type": 2 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "dma_device_id": "system", 00:28:14.349 "dma_device_type": 1 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.349 "dma_device_type": 2 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "dma_device_id": "system", 00:28:14.349 "dma_device_type": 1 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.349 "dma_device_type": 2 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "dma_device_id": "system", 00:28:14.349 "dma_device_type": 1 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.349 "dma_device_type": 2 00:28:14.349 } 00:28:14.349 ], 00:28:14.349 "driver_specific": { 00:28:14.349 "raid": { 00:28:14.349 "uuid": "0c952c6e-b491-47bb-99c9-bbc941600034", 00:28:14.349 "strip_size_kb": 0, 00:28:14.349 "state": "online", 00:28:14.349 "raid_level": "raid1", 00:28:14.349 "superblock": false, 00:28:14.349 "num_base_bdevs": 4, 00:28:14.349 "num_base_bdevs_discovered": 4, 00:28:14.349 "num_base_bdevs_operational": 4, 00:28:14.349 "base_bdevs_list": [ 00:28:14.349 { 00:28:14.349 "name": "NewBaseBdev", 00:28:14.349 "uuid": "a2eec878-7890-4dd9-aa05-967c9ce2c4cb", 00:28:14.349 "is_configured": true, 00:28:14.349 "data_offset": 0, 00:28:14.349 "data_size": 65536 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "name": "BaseBdev2", 00:28:14.349 "uuid": "d11631ae-3a8f-41f8-9c1e-0c08216604c5", 00:28:14.349 "is_configured": true, 00:28:14.349 "data_offset": 0, 00:28:14.349 "data_size": 65536 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "name": "BaseBdev3", 00:28:14.349 "uuid": "1121336d-00e6-4f49-b74a-a7f3d361d6b3", 00:28:14.349 
"is_configured": true, 00:28:14.349 "data_offset": 0, 00:28:14.349 "data_size": 65536 00:28:14.349 }, 00:28:14.349 { 00:28:14.349 "name": "BaseBdev4", 00:28:14.349 "uuid": "da08c4d4-3b08-42c3-9438-57a87d16cae8", 00:28:14.349 "is_configured": true, 00:28:14.349 "data_offset": 0, 00:28:14.349 "data_size": 65536 00:28:14.349 } 00:28:14.349 ] 00:28:14.349 } 00:28:14.349 } 00:28:14.349 }' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:14.349 BaseBdev2 00:28:14.349 BaseBdev3 00:28:14.349 BaseBdev4' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:14.349 15:57:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.349 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:14.350 15:57:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.350 [2024-11-05 15:57:46.746237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:14.350 [2024-11-05 15:57:46.746262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:14.350 [2024-11-05 15:57:46.746329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.350 [2024-11-05 15:57:46.746621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:14.350 [2024-11-05 15:57:46.746640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71029 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71029 ']' 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71029 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:14.350 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71029 00:28:14.607 killing process with pid 71029 00:28:14.607 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:14.607 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:14.607 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71029' 00:28:14.607 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71029 00:28:14.607 [2024-11-05 15:57:46.775171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:14.607 15:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71029 00:28:14.607 [2024-11-05 15:57:47.015264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:28:15.538 00:28:15.538 real 0m8.290s 00:28:15.538 user 0m13.318s 00:28:15.538 sys 0m1.243s 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:15.538 ************************************ 00:28:15.538 END TEST raid_state_function_test 00:28:15.538 ************************************ 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:28:15.538 15:57:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:28:15.538 15:57:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:15.538 15:57:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:15.538 15:57:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:15.538 ************************************ 00:28:15.538 START TEST raid_state_function_test_sb 00:28:15.538 ************************************ 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:15.538 
15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71664 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 71664' 00:28:15.538 Process raid pid: 71664 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71664 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 71664 ']' 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:15.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:15.538 15:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.538 [2024-11-05 15:57:47.804276] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:15.538 [2024-11-05 15:57:47.804394] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.795 [2024-11-05 15:57:47.962054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.795 [2024-11-05 15:57:48.058414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.795 [2024-11-05 15:57:48.196345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:15.795 [2024-11-05 15:57:48.196379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.361 [2024-11-05 15:57:48.648924] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:16.361 [2024-11-05 15:57:48.648972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:16.361 [2024-11-05 15:57:48.648982] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:16.361 [2024-11-05 15:57:48.648991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:16.361 [2024-11-05 15:57:48.648998] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:28:16.361 [2024-11-05 15:57:48.649006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:16.361 [2024-11-05 15:57:48.649016] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:16.361 [2024-11-05 15:57:48.649025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.361 15:57:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.361 "name": "Existed_Raid", 00:28:16.361 "uuid": "239f2e03-66aa-4116-b394-0dc204ce9dff", 00:28:16.361 "strip_size_kb": 0, 00:28:16.361 "state": "configuring", 00:28:16.361 "raid_level": "raid1", 00:28:16.361 "superblock": true, 00:28:16.361 "num_base_bdevs": 4, 00:28:16.361 "num_base_bdevs_discovered": 0, 00:28:16.361 "num_base_bdevs_operational": 4, 00:28:16.361 "base_bdevs_list": [ 00:28:16.361 { 00:28:16.361 "name": "BaseBdev1", 00:28:16.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.361 "is_configured": false, 00:28:16.361 "data_offset": 0, 00:28:16.361 "data_size": 0 00:28:16.361 }, 00:28:16.361 { 00:28:16.361 "name": "BaseBdev2", 00:28:16.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.361 "is_configured": false, 00:28:16.361 "data_offset": 0, 00:28:16.361 "data_size": 0 00:28:16.361 }, 00:28:16.361 { 00:28:16.361 "name": "BaseBdev3", 00:28:16.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.361 "is_configured": false, 00:28:16.361 "data_offset": 0, 00:28:16.361 "data_size": 0 00:28:16.361 }, 00:28:16.361 { 00:28:16.361 "name": "BaseBdev4", 00:28:16.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.361 "is_configured": false, 00:28:16.361 "data_offset": 0, 00:28:16.361 "data_size": 0 00:28:16.361 } 00:28:16.361 ] 00:28:16.361 }' 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.361 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 15:57:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:16.619 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.619 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 [2024-11-05 15:57:48.976961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:16.619 [2024-11-05 15:57:48.976997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:16.619 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.619 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:16.619 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.619 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 [2024-11-05 15:57:48.984966] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:16.619 [2024-11-05 15:57:48.984999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:16.619 [2024-11-05 15:57:48.985007] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:16.619 [2024-11-05 15:57:48.985016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:16.619 [2024-11-05 15:57:48.985022] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:16.619 [2024-11-05 15:57:48.985029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:16.619 [2024-11-05 15:57:48.985035] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:28:16.619 [2024-11-05 15:57:48.985044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:16.620 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.620 15:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:16.620 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.620 15:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.620 [2024-11-05 15:57:49.016904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:16.620 BaseBdev1 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.620 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.877 [ 00:28:16.877 { 00:28:16.877 "name": "BaseBdev1", 00:28:16.877 "aliases": [ 00:28:16.877 "55cf8a28-37b8-49bc-a2a3-788f7695ba2e" 00:28:16.877 ], 00:28:16.877 "product_name": "Malloc disk", 00:28:16.877 "block_size": 512, 00:28:16.877 "num_blocks": 65536, 00:28:16.877 "uuid": "55cf8a28-37b8-49bc-a2a3-788f7695ba2e", 00:28:16.877 "assigned_rate_limits": { 00:28:16.877 "rw_ios_per_sec": 0, 00:28:16.877 "rw_mbytes_per_sec": 0, 00:28:16.877 "r_mbytes_per_sec": 0, 00:28:16.877 "w_mbytes_per_sec": 0 00:28:16.877 }, 00:28:16.877 "claimed": true, 00:28:16.877 "claim_type": "exclusive_write", 00:28:16.877 "zoned": false, 00:28:16.877 "supported_io_types": { 00:28:16.877 "read": true, 00:28:16.877 "write": true, 00:28:16.877 "unmap": true, 00:28:16.877 "flush": true, 00:28:16.877 "reset": true, 00:28:16.877 "nvme_admin": false, 00:28:16.877 "nvme_io": false, 00:28:16.877 "nvme_io_md": false, 00:28:16.877 "write_zeroes": true, 00:28:16.877 "zcopy": true, 00:28:16.877 "get_zone_info": false, 00:28:16.877 "zone_management": false, 00:28:16.877 "zone_append": false, 00:28:16.877 "compare": false, 00:28:16.877 "compare_and_write": false, 00:28:16.877 "abort": true, 00:28:16.877 "seek_hole": false, 00:28:16.877 "seek_data": false, 00:28:16.877 "copy": true, 00:28:16.877 "nvme_iov_md": false 00:28:16.877 }, 00:28:16.877 "memory_domains": [ 00:28:16.877 { 00:28:16.877 "dma_device_id": "system", 00:28:16.877 "dma_device_type": 1 00:28:16.877 }, 00:28:16.877 { 00:28:16.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.877 "dma_device_type": 2 00:28:16.877 } 00:28:16.877 ], 00:28:16.877 "driver_specific": {} 
00:28:16.877 } 00:28:16.877 ] 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.877 "name": "Existed_Raid", 00:28:16.877 "uuid": "9f23f346-2f65-4129-b9bd-d8e581b4013a", 00:28:16.877 "strip_size_kb": 0, 00:28:16.877 "state": "configuring", 00:28:16.877 "raid_level": "raid1", 00:28:16.877 "superblock": true, 00:28:16.877 "num_base_bdevs": 4, 00:28:16.877 "num_base_bdevs_discovered": 1, 00:28:16.877 "num_base_bdevs_operational": 4, 00:28:16.877 "base_bdevs_list": [ 00:28:16.877 { 00:28:16.877 "name": "BaseBdev1", 00:28:16.877 "uuid": "55cf8a28-37b8-49bc-a2a3-788f7695ba2e", 00:28:16.877 "is_configured": true, 00:28:16.877 "data_offset": 2048, 00:28:16.877 "data_size": 63488 00:28:16.877 }, 00:28:16.877 { 00:28:16.877 "name": "BaseBdev2", 00:28:16.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.877 "is_configured": false, 00:28:16.877 "data_offset": 0, 00:28:16.877 "data_size": 0 00:28:16.877 }, 00:28:16.877 { 00:28:16.877 "name": "BaseBdev3", 00:28:16.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.877 "is_configured": false, 00:28:16.877 "data_offset": 0, 00:28:16.877 "data_size": 0 00:28:16.877 }, 00:28:16.877 { 00:28:16.877 "name": "BaseBdev4", 00:28:16.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.877 "is_configured": false, 00:28:16.877 "data_offset": 0, 00:28:16.877 "data_size": 0 00:28:16.877 } 00:28:16.877 ] 00:28:16.877 }' 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.877 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.135 [2024-11-05 15:57:49.360994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:17.135 [2024-11-05 15:57:49.361036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.135 [2024-11-05 15:57:49.369055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:17.135 [2024-11-05 15:57:49.370871] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:17.135 [2024-11-05 15:57:49.370908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:17.135 [2024-11-05 15:57:49.370917] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:17.135 [2024-11-05 15:57:49.370927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:17.135 [2024-11-05 15:57:49.370934] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:17.135 [2024-11-05 15:57:49.370942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:17.135 15:57:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.135 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:17.135 "name": 
"Existed_Raid", 00:28:17.135 "uuid": "f63aef30-6388-4288-9473-8a90e7984420", 00:28:17.135 "strip_size_kb": 0, 00:28:17.135 "state": "configuring", 00:28:17.135 "raid_level": "raid1", 00:28:17.135 "superblock": true, 00:28:17.135 "num_base_bdevs": 4, 00:28:17.135 "num_base_bdevs_discovered": 1, 00:28:17.135 "num_base_bdevs_operational": 4, 00:28:17.135 "base_bdevs_list": [ 00:28:17.135 { 00:28:17.135 "name": "BaseBdev1", 00:28:17.135 "uuid": "55cf8a28-37b8-49bc-a2a3-788f7695ba2e", 00:28:17.135 "is_configured": true, 00:28:17.135 "data_offset": 2048, 00:28:17.135 "data_size": 63488 00:28:17.135 }, 00:28:17.135 { 00:28:17.135 "name": "BaseBdev2", 00:28:17.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.136 "is_configured": false, 00:28:17.136 "data_offset": 0, 00:28:17.136 "data_size": 0 00:28:17.136 }, 00:28:17.136 { 00:28:17.136 "name": "BaseBdev3", 00:28:17.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.136 "is_configured": false, 00:28:17.136 "data_offset": 0, 00:28:17.136 "data_size": 0 00:28:17.136 }, 00:28:17.136 { 00:28:17.136 "name": "BaseBdev4", 00:28:17.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.136 "is_configured": false, 00:28:17.136 "data_offset": 0, 00:28:17.136 "data_size": 0 00:28:17.136 } 00:28:17.136 ] 00:28:17.136 }' 00:28:17.136 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:17.136 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.397 [2024-11-05 15:57:49.707425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:17.397 
BaseBdev2 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.397 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.397 [ 00:28:17.397 { 00:28:17.397 "name": "BaseBdev2", 00:28:17.397 "aliases": [ 00:28:17.397 "e35a6ec0-800a-4d28-b2fb-c0731691670e" 00:28:17.397 ], 00:28:17.397 "product_name": "Malloc disk", 00:28:17.397 "block_size": 512, 00:28:17.398 "num_blocks": 65536, 00:28:17.398 "uuid": "e35a6ec0-800a-4d28-b2fb-c0731691670e", 00:28:17.398 "assigned_rate_limits": { 
00:28:17.398 "rw_ios_per_sec": 0, 00:28:17.398 "rw_mbytes_per_sec": 0, 00:28:17.398 "r_mbytes_per_sec": 0, 00:28:17.398 "w_mbytes_per_sec": 0 00:28:17.398 }, 00:28:17.398 "claimed": true, 00:28:17.398 "claim_type": "exclusive_write", 00:28:17.398 "zoned": false, 00:28:17.398 "supported_io_types": { 00:28:17.398 "read": true, 00:28:17.398 "write": true, 00:28:17.398 "unmap": true, 00:28:17.398 "flush": true, 00:28:17.398 "reset": true, 00:28:17.398 "nvme_admin": false, 00:28:17.398 "nvme_io": false, 00:28:17.398 "nvme_io_md": false, 00:28:17.398 "write_zeroes": true, 00:28:17.398 "zcopy": true, 00:28:17.398 "get_zone_info": false, 00:28:17.398 "zone_management": false, 00:28:17.398 "zone_append": false, 00:28:17.398 "compare": false, 00:28:17.398 "compare_and_write": false, 00:28:17.398 "abort": true, 00:28:17.398 "seek_hole": false, 00:28:17.398 "seek_data": false, 00:28:17.398 "copy": true, 00:28:17.398 "nvme_iov_md": false 00:28:17.398 }, 00:28:17.399 "memory_domains": [ 00:28:17.399 { 00:28:17.399 "dma_device_id": "system", 00:28:17.399 "dma_device_type": 1 00:28:17.399 }, 00:28:17.399 { 00:28:17.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.399 "dma_device_type": 2 00:28:17.399 } 00:28:17.399 ], 00:28:17.399 "driver_specific": {} 00:28:17.399 } 00:28:17.399 ] 00:28:17.399 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.399 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:17.399 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:17.399 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.400 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:17.400 "name": "Existed_Raid", 00:28:17.400 "uuid": "f63aef30-6388-4288-9473-8a90e7984420", 00:28:17.400 "strip_size_kb": 0, 00:28:17.400 "state": "configuring", 00:28:17.400 "raid_level": "raid1", 00:28:17.400 "superblock": true, 00:28:17.400 "num_base_bdevs": 4, 00:28:17.400 "num_base_bdevs_discovered": 2, 00:28:17.400 "num_base_bdevs_operational": 4, 00:28:17.400 
"base_bdevs_list": [ 00:28:17.400 { 00:28:17.400 "name": "BaseBdev1", 00:28:17.400 "uuid": "55cf8a28-37b8-49bc-a2a3-788f7695ba2e", 00:28:17.400 "is_configured": true, 00:28:17.401 "data_offset": 2048, 00:28:17.401 "data_size": 63488 00:28:17.401 }, 00:28:17.401 { 00:28:17.401 "name": "BaseBdev2", 00:28:17.401 "uuid": "e35a6ec0-800a-4d28-b2fb-c0731691670e", 00:28:17.401 "is_configured": true, 00:28:17.401 "data_offset": 2048, 00:28:17.401 "data_size": 63488 00:28:17.401 }, 00:28:17.401 { 00:28:17.401 "name": "BaseBdev3", 00:28:17.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.401 "is_configured": false, 00:28:17.401 "data_offset": 0, 00:28:17.401 "data_size": 0 00:28:17.401 }, 00:28:17.401 { 00:28:17.401 "name": "BaseBdev4", 00:28:17.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.401 "is_configured": false, 00:28:17.401 "data_offset": 0, 00:28:17.401 "data_size": 0 00:28:17.401 } 00:28:17.401 ] 00:28:17.401 }' 00:28:17.401 15:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:17.401 15:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.664 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:17.664 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.664 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.922 [2024-11-05 15:57:50.094811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:17.922 BaseBdev3 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.922 [ 00:28:17.922 { 00:28:17.922 "name": "BaseBdev3", 00:28:17.922 "aliases": [ 00:28:17.922 "7d5d69ba-5928-4962-aa6d-47f7a6bee070" 00:28:17.922 ], 00:28:17.922 "product_name": "Malloc disk", 00:28:17.922 "block_size": 512, 00:28:17.922 "num_blocks": 65536, 00:28:17.922 "uuid": "7d5d69ba-5928-4962-aa6d-47f7a6bee070", 00:28:17.922 "assigned_rate_limits": { 00:28:17.922 "rw_ios_per_sec": 0, 00:28:17.922 "rw_mbytes_per_sec": 0, 00:28:17.922 "r_mbytes_per_sec": 0, 00:28:17.922 "w_mbytes_per_sec": 0 00:28:17.922 }, 00:28:17.922 "claimed": true, 00:28:17.922 "claim_type": "exclusive_write", 00:28:17.922 "zoned": false, 00:28:17.922 "supported_io_types": { 00:28:17.922 "read": true, 00:28:17.922 
"write": true, 00:28:17.922 "unmap": true, 00:28:17.922 "flush": true, 00:28:17.922 "reset": true, 00:28:17.922 "nvme_admin": false, 00:28:17.922 "nvme_io": false, 00:28:17.922 "nvme_io_md": false, 00:28:17.922 "write_zeroes": true, 00:28:17.922 "zcopy": true, 00:28:17.922 "get_zone_info": false, 00:28:17.922 "zone_management": false, 00:28:17.922 "zone_append": false, 00:28:17.922 "compare": false, 00:28:17.922 "compare_and_write": false, 00:28:17.922 "abort": true, 00:28:17.922 "seek_hole": false, 00:28:17.922 "seek_data": false, 00:28:17.922 "copy": true, 00:28:17.922 "nvme_iov_md": false 00:28:17.922 }, 00:28:17.922 "memory_domains": [ 00:28:17.922 { 00:28:17.922 "dma_device_id": "system", 00:28:17.922 "dma_device_type": 1 00:28:17.922 }, 00:28:17.922 { 00:28:17.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.922 "dma_device_type": 2 00:28:17.922 } 00:28:17.922 ], 00:28:17.922 "driver_specific": {} 00:28:17.922 } 00:28:17.922 ] 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:17.922 "name": "Existed_Raid", 00:28:17.922 "uuid": "f63aef30-6388-4288-9473-8a90e7984420", 00:28:17.922 "strip_size_kb": 0, 00:28:17.922 "state": "configuring", 00:28:17.922 "raid_level": "raid1", 00:28:17.922 "superblock": true, 00:28:17.922 "num_base_bdevs": 4, 00:28:17.922 "num_base_bdevs_discovered": 3, 00:28:17.922 "num_base_bdevs_operational": 4, 00:28:17.922 "base_bdevs_list": [ 00:28:17.922 { 00:28:17.922 "name": "BaseBdev1", 00:28:17.922 "uuid": "55cf8a28-37b8-49bc-a2a3-788f7695ba2e", 00:28:17.922 "is_configured": true, 00:28:17.922 "data_offset": 2048, 00:28:17.922 "data_size": 63488 00:28:17.922 }, 00:28:17.922 { 00:28:17.922 "name": "BaseBdev2", 00:28:17.922 "uuid": 
"e35a6ec0-800a-4d28-b2fb-c0731691670e", 00:28:17.922 "is_configured": true, 00:28:17.922 "data_offset": 2048, 00:28:17.922 "data_size": 63488 00:28:17.922 }, 00:28:17.922 { 00:28:17.922 "name": "BaseBdev3", 00:28:17.922 "uuid": "7d5d69ba-5928-4962-aa6d-47f7a6bee070", 00:28:17.922 "is_configured": true, 00:28:17.922 "data_offset": 2048, 00:28:17.922 "data_size": 63488 00:28:17.922 }, 00:28:17.922 { 00:28:17.922 "name": "BaseBdev4", 00:28:17.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.922 "is_configured": false, 00:28:17.922 "data_offset": 0, 00:28:17.922 "data_size": 0 00:28:17.922 } 00:28:17.922 ] 00:28:17.922 }' 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:17.922 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.180 [2024-11-05 15:57:50.477039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:18.180 [2024-11-05 15:57:50.477249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:18.180 [2024-11-05 15:57:50.477268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:18.180 [2024-11-05 15:57:50.477533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:18.180 [2024-11-05 15:57:50.477678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:18.180 [2024-11-05 15:57:50.477697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:28:18.180 [2024-11-05 15:57:50.477825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:18.180 BaseBdev4 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.180 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.180 [ 00:28:18.180 { 00:28:18.180 "name": "BaseBdev4", 00:28:18.180 "aliases": [ 00:28:18.180 "c526a357-14ae-492e-a0d3-8d945277359b" 00:28:18.180 ], 00:28:18.180 "product_name": "Malloc disk", 00:28:18.180 "block_size": 512, 00:28:18.180 
"num_blocks": 65536, 00:28:18.180 "uuid": "c526a357-14ae-492e-a0d3-8d945277359b", 00:28:18.180 "assigned_rate_limits": { 00:28:18.180 "rw_ios_per_sec": 0, 00:28:18.180 "rw_mbytes_per_sec": 0, 00:28:18.180 "r_mbytes_per_sec": 0, 00:28:18.180 "w_mbytes_per_sec": 0 00:28:18.180 }, 00:28:18.180 "claimed": true, 00:28:18.180 "claim_type": "exclusive_write", 00:28:18.180 "zoned": false, 00:28:18.180 "supported_io_types": { 00:28:18.180 "read": true, 00:28:18.180 "write": true, 00:28:18.180 "unmap": true, 00:28:18.180 "flush": true, 00:28:18.181 "reset": true, 00:28:18.181 "nvme_admin": false, 00:28:18.181 "nvme_io": false, 00:28:18.181 "nvme_io_md": false, 00:28:18.181 "write_zeroes": true, 00:28:18.181 "zcopy": true, 00:28:18.181 "get_zone_info": false, 00:28:18.181 "zone_management": false, 00:28:18.181 "zone_append": false, 00:28:18.181 "compare": false, 00:28:18.181 "compare_and_write": false, 00:28:18.181 "abort": true, 00:28:18.181 "seek_hole": false, 00:28:18.181 "seek_data": false, 00:28:18.181 "copy": true, 00:28:18.181 "nvme_iov_md": false 00:28:18.181 }, 00:28:18.181 "memory_domains": [ 00:28:18.181 { 00:28:18.181 "dma_device_id": "system", 00:28:18.181 "dma_device_type": 1 00:28:18.181 }, 00:28:18.181 { 00:28:18.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.181 "dma_device_type": 2 00:28:18.181 } 00:28:18.181 ], 00:28:18.181 "driver_specific": {} 00:28:18.181 } 00:28:18.181 ] 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:18.181 "name": "Existed_Raid", 00:28:18.181 "uuid": "f63aef30-6388-4288-9473-8a90e7984420", 00:28:18.181 "strip_size_kb": 0, 00:28:18.181 "state": "online", 00:28:18.181 "raid_level": "raid1", 00:28:18.181 "superblock": true, 00:28:18.181 "num_base_bdevs": 4, 
00:28:18.181 "num_base_bdevs_discovered": 4, 00:28:18.181 "num_base_bdevs_operational": 4, 00:28:18.181 "base_bdevs_list": [ 00:28:18.181 { 00:28:18.181 "name": "BaseBdev1", 00:28:18.181 "uuid": "55cf8a28-37b8-49bc-a2a3-788f7695ba2e", 00:28:18.181 "is_configured": true, 00:28:18.181 "data_offset": 2048, 00:28:18.181 "data_size": 63488 00:28:18.181 }, 00:28:18.181 { 00:28:18.181 "name": "BaseBdev2", 00:28:18.181 "uuid": "e35a6ec0-800a-4d28-b2fb-c0731691670e", 00:28:18.181 "is_configured": true, 00:28:18.181 "data_offset": 2048, 00:28:18.181 "data_size": 63488 00:28:18.181 }, 00:28:18.181 { 00:28:18.181 "name": "BaseBdev3", 00:28:18.181 "uuid": "7d5d69ba-5928-4962-aa6d-47f7a6bee070", 00:28:18.181 "is_configured": true, 00:28:18.181 "data_offset": 2048, 00:28:18.181 "data_size": 63488 00:28:18.181 }, 00:28:18.181 { 00:28:18.181 "name": "BaseBdev4", 00:28:18.181 "uuid": "c526a357-14ae-492e-a0d3-8d945277359b", 00:28:18.181 "is_configured": true, 00:28:18.181 "data_offset": 2048, 00:28:18.181 "data_size": 63488 00:28:18.181 } 00:28:18.181 ] 00:28:18.181 }' 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.181 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:18.438 
15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.438 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:18.438 [2024-11-05 15:57:50.841526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:18.696 "name": "Existed_Raid", 00:28:18.696 "aliases": [ 00:28:18.696 "f63aef30-6388-4288-9473-8a90e7984420" 00:28:18.696 ], 00:28:18.696 "product_name": "Raid Volume", 00:28:18.696 "block_size": 512, 00:28:18.696 "num_blocks": 63488, 00:28:18.696 "uuid": "f63aef30-6388-4288-9473-8a90e7984420", 00:28:18.696 "assigned_rate_limits": { 00:28:18.696 "rw_ios_per_sec": 0, 00:28:18.696 "rw_mbytes_per_sec": 0, 00:28:18.696 "r_mbytes_per_sec": 0, 00:28:18.696 "w_mbytes_per_sec": 0 00:28:18.696 }, 00:28:18.696 "claimed": false, 00:28:18.696 "zoned": false, 00:28:18.696 "supported_io_types": { 00:28:18.696 "read": true, 00:28:18.696 "write": true, 00:28:18.696 "unmap": false, 00:28:18.696 "flush": false, 00:28:18.696 "reset": true, 00:28:18.696 "nvme_admin": false, 00:28:18.696 "nvme_io": false, 00:28:18.696 "nvme_io_md": false, 00:28:18.696 "write_zeroes": true, 00:28:18.696 "zcopy": false, 00:28:18.696 "get_zone_info": false, 00:28:18.696 "zone_management": false, 00:28:18.696 "zone_append": false, 00:28:18.696 "compare": false, 00:28:18.696 "compare_and_write": false, 00:28:18.696 "abort": false, 00:28:18.696 "seek_hole": false, 00:28:18.696 "seek_data": false, 00:28:18.696 "copy": false, 00:28:18.696 
"nvme_iov_md": false 00:28:18.696 }, 00:28:18.696 "memory_domains": [ 00:28:18.696 { 00:28:18.696 "dma_device_id": "system", 00:28:18.696 "dma_device_type": 1 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.696 "dma_device_type": 2 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "dma_device_id": "system", 00:28:18.696 "dma_device_type": 1 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.696 "dma_device_type": 2 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "dma_device_id": "system", 00:28:18.696 "dma_device_type": 1 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.696 "dma_device_type": 2 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "dma_device_id": "system", 00:28:18.696 "dma_device_type": 1 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.696 "dma_device_type": 2 00:28:18.696 } 00:28:18.696 ], 00:28:18.696 "driver_specific": { 00:28:18.696 "raid": { 00:28:18.696 "uuid": "f63aef30-6388-4288-9473-8a90e7984420", 00:28:18.696 "strip_size_kb": 0, 00:28:18.696 "state": "online", 00:28:18.696 "raid_level": "raid1", 00:28:18.696 "superblock": true, 00:28:18.696 "num_base_bdevs": 4, 00:28:18.696 "num_base_bdevs_discovered": 4, 00:28:18.696 "num_base_bdevs_operational": 4, 00:28:18.696 "base_bdevs_list": [ 00:28:18.696 { 00:28:18.696 "name": "BaseBdev1", 00:28:18.696 "uuid": "55cf8a28-37b8-49bc-a2a3-788f7695ba2e", 00:28:18.696 "is_configured": true, 00:28:18.696 "data_offset": 2048, 00:28:18.696 "data_size": 63488 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "name": "BaseBdev2", 00:28:18.696 "uuid": "e35a6ec0-800a-4d28-b2fb-c0731691670e", 00:28:18.696 "is_configured": true, 00:28:18.696 "data_offset": 2048, 00:28:18.696 "data_size": 63488 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "name": "BaseBdev3", 00:28:18.696 "uuid": "7d5d69ba-5928-4962-aa6d-47f7a6bee070", 00:28:18.696 "is_configured": true, 
00:28:18.696 "data_offset": 2048, 00:28:18.696 "data_size": 63488 00:28:18.696 }, 00:28:18.696 { 00:28:18.696 "name": "BaseBdev4", 00:28:18.696 "uuid": "c526a357-14ae-492e-a0d3-8d945277359b", 00:28:18.696 "is_configured": true, 00:28:18.696 "data_offset": 2048, 00:28:18.696 "data_size": 63488 00:28:18.696 } 00:28:18.696 ] 00:28:18.696 } 00:28:18.696 } 00:28:18.696 }' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:18.696 BaseBdev2 00:28:18.696 BaseBdev3 00:28:18.696 BaseBdev4' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:18.696 15:57:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:18.696 15:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.696 [2024-11-05 15:57:51.041275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:18.696 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:18.697 15:57:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.697 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:18.954 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.954 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:18.954 "name": "Existed_Raid", 00:28:18.954 "uuid": "f63aef30-6388-4288-9473-8a90e7984420", 00:28:18.954 "strip_size_kb": 0, 00:28:18.954 
"state": "online", 00:28:18.954 "raid_level": "raid1", 00:28:18.954 "superblock": true, 00:28:18.954 "num_base_bdevs": 4, 00:28:18.954 "num_base_bdevs_discovered": 3, 00:28:18.954 "num_base_bdevs_operational": 3, 00:28:18.954 "base_bdevs_list": [ 00:28:18.954 { 00:28:18.954 "name": null, 00:28:18.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.954 "is_configured": false, 00:28:18.954 "data_offset": 0, 00:28:18.954 "data_size": 63488 00:28:18.954 }, 00:28:18.954 { 00:28:18.954 "name": "BaseBdev2", 00:28:18.954 "uuid": "e35a6ec0-800a-4d28-b2fb-c0731691670e", 00:28:18.954 "is_configured": true, 00:28:18.954 "data_offset": 2048, 00:28:18.954 "data_size": 63488 00:28:18.954 }, 00:28:18.954 { 00:28:18.954 "name": "BaseBdev3", 00:28:18.954 "uuid": "7d5d69ba-5928-4962-aa6d-47f7a6bee070", 00:28:18.954 "is_configured": true, 00:28:18.954 "data_offset": 2048, 00:28:18.954 "data_size": 63488 00:28:18.954 }, 00:28:18.954 { 00:28:18.954 "name": "BaseBdev4", 00:28:18.954 "uuid": "c526a357-14ae-492e-a0d3-8d945277359b", 00:28:18.954 "is_configured": true, 00:28:18.954 "data_offset": 2048, 00:28:18.954 "data_size": 63488 00:28:18.954 } 00:28:18.954 ] 00:28:18.954 }' 00:28:18.954 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.954 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.212 15:57:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.212 [2024-11-05 15:57:51.450128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.212 [2024-11-05 15:57:51.548388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.212 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.470 [2024-11-05 15:57:51.646198] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:19.470 [2024-11-05 15:57:51.646291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:19.470 [2024-11-05 15:57:51.705677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:19.470 [2024-11-05 15:57:51.705723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:19.470 [2024-11-05 15:57:51.705735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.470 BaseBdev2 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:19.470 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:28:19.471 [ 00:28:19.471 { 00:28:19.471 "name": "BaseBdev2", 00:28:19.471 "aliases": [ 00:28:19.471 "0244cd2f-6235-45d5-863c-57823920aec3" 00:28:19.471 ], 00:28:19.471 "product_name": "Malloc disk", 00:28:19.471 "block_size": 512, 00:28:19.471 "num_blocks": 65536, 00:28:19.471 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:19.471 "assigned_rate_limits": { 00:28:19.471 "rw_ios_per_sec": 0, 00:28:19.471 "rw_mbytes_per_sec": 0, 00:28:19.471 "r_mbytes_per_sec": 0, 00:28:19.471 "w_mbytes_per_sec": 0 00:28:19.471 }, 00:28:19.471 "claimed": false, 00:28:19.471 "zoned": false, 00:28:19.471 "supported_io_types": { 00:28:19.471 "read": true, 00:28:19.471 "write": true, 00:28:19.471 "unmap": true, 00:28:19.471 "flush": true, 00:28:19.471 "reset": true, 00:28:19.471 "nvme_admin": false, 00:28:19.471 "nvme_io": false, 00:28:19.471 "nvme_io_md": false, 00:28:19.471 "write_zeroes": true, 00:28:19.471 "zcopy": true, 00:28:19.471 "get_zone_info": false, 00:28:19.471 "zone_management": false, 00:28:19.471 "zone_append": false, 00:28:19.471 "compare": false, 00:28:19.471 "compare_and_write": false, 00:28:19.471 "abort": true, 00:28:19.471 "seek_hole": false, 00:28:19.471 "seek_data": false, 00:28:19.471 "copy": true, 00:28:19.471 "nvme_iov_md": false 00:28:19.471 }, 00:28:19.471 "memory_domains": [ 00:28:19.471 { 00:28:19.471 "dma_device_id": "system", 00:28:19.471 "dma_device_type": 1 00:28:19.471 }, 00:28:19.471 { 00:28:19.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:19.471 "dma_device_type": 2 00:28:19.471 } 00:28:19.471 ], 00:28:19.471 "driver_specific": {} 00:28:19.471 } 00:28:19.471 ] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:19.471 15:57:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 BaseBdev3 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 [ 00:28:19.471 { 00:28:19.471 "name": "BaseBdev3", 00:28:19.471 "aliases": [ 00:28:19.471 "181ea639-a3f9-4f7e-adb3-6ee3cef0324b" 00:28:19.471 ], 00:28:19.471 "product_name": "Malloc disk", 00:28:19.471 "block_size": 512, 00:28:19.471 "num_blocks": 65536, 00:28:19.471 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:19.471 "assigned_rate_limits": { 00:28:19.471 "rw_ios_per_sec": 0, 00:28:19.471 "rw_mbytes_per_sec": 0, 00:28:19.471 "r_mbytes_per_sec": 0, 00:28:19.471 "w_mbytes_per_sec": 0 00:28:19.471 }, 00:28:19.471 "claimed": false, 00:28:19.471 "zoned": false, 00:28:19.471 "supported_io_types": { 00:28:19.471 "read": true, 00:28:19.471 "write": true, 00:28:19.471 "unmap": true, 00:28:19.471 "flush": true, 00:28:19.471 "reset": true, 00:28:19.471 "nvme_admin": false, 00:28:19.471 "nvme_io": false, 00:28:19.471 "nvme_io_md": false, 00:28:19.471 "write_zeroes": true, 00:28:19.471 "zcopy": true, 00:28:19.471 "get_zone_info": false, 00:28:19.471 "zone_management": false, 00:28:19.471 "zone_append": false, 00:28:19.471 "compare": false, 00:28:19.471 "compare_and_write": false, 00:28:19.471 "abort": true, 00:28:19.471 "seek_hole": false, 00:28:19.471 "seek_data": false, 00:28:19.471 "copy": true, 00:28:19.471 "nvme_iov_md": false 00:28:19.471 }, 00:28:19.471 "memory_domains": [ 00:28:19.471 { 00:28:19.471 "dma_device_id": "system", 00:28:19.471 "dma_device_type": 1 00:28:19.471 }, 00:28:19.471 { 00:28:19.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:19.471 "dma_device_type": 2 00:28:19.471 } 00:28:19.471 ], 00:28:19.471 "driver_specific": {} 00:28:19.471 } 00:28:19.471 ] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 BaseBdev4 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.471 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.729 [ 00:28:19.729 { 00:28:19.729 "name": "BaseBdev4", 00:28:19.729 "aliases": [ 00:28:19.729 "3d0cbe5f-a956-4487-a3be-d3000cbc64d2" 00:28:19.729 ], 00:28:19.729 "product_name": "Malloc disk", 00:28:19.729 "block_size": 512, 00:28:19.729 "num_blocks": 65536, 00:28:19.729 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:19.729 "assigned_rate_limits": { 00:28:19.729 "rw_ios_per_sec": 0, 00:28:19.729 "rw_mbytes_per_sec": 0, 00:28:19.729 "r_mbytes_per_sec": 0, 00:28:19.729 "w_mbytes_per_sec": 0 00:28:19.729 }, 00:28:19.729 "claimed": false, 00:28:19.729 "zoned": false, 00:28:19.729 "supported_io_types": { 00:28:19.729 "read": true, 00:28:19.729 "write": true, 00:28:19.729 "unmap": true, 00:28:19.729 "flush": true, 00:28:19.729 "reset": true, 00:28:19.729 "nvme_admin": false, 00:28:19.729 "nvme_io": false, 00:28:19.729 "nvme_io_md": false, 00:28:19.729 "write_zeroes": true, 00:28:19.729 "zcopy": true, 00:28:19.729 "get_zone_info": false, 00:28:19.729 "zone_management": false, 00:28:19.729 "zone_append": false, 00:28:19.729 "compare": false, 00:28:19.729 "compare_and_write": false, 00:28:19.729 "abort": true, 00:28:19.729 "seek_hole": false, 00:28:19.729 "seek_data": false, 00:28:19.729 "copy": true, 00:28:19.729 "nvme_iov_md": false 00:28:19.729 }, 00:28:19.729 "memory_domains": [ 00:28:19.729 { 00:28:19.729 "dma_device_id": "system", 00:28:19.729 "dma_device_type": 1 00:28:19.729 }, 00:28:19.729 { 00:28:19.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:19.729 "dma_device_type": 2 00:28:19.729 } 00:28:19.729 ], 00:28:19.729 "driver_specific": {} 00:28:19.729 } 00:28:19.729 ] 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.729 [2024-11-05 15:57:51.895999] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:19.729 [2024-11-05 15:57:51.896043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:19.729 [2024-11-05 15:57:51.896060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:19.729 [2024-11-05 15:57:51.897902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:19.729 [2024-11-05 15:57:51.897950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.729 "name": "Existed_Raid", 00:28:19.729 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:19.729 "strip_size_kb": 0, 00:28:19.729 "state": "configuring", 00:28:19.729 "raid_level": "raid1", 00:28:19.729 "superblock": true, 00:28:19.729 "num_base_bdevs": 4, 00:28:19.729 "num_base_bdevs_discovered": 3, 00:28:19.729 "num_base_bdevs_operational": 4, 00:28:19.729 "base_bdevs_list": [ 00:28:19.729 { 00:28:19.729 "name": "BaseBdev1", 00:28:19.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.729 "is_configured": false, 00:28:19.729 "data_offset": 0, 00:28:19.729 "data_size": 0 00:28:19.729 }, 00:28:19.729 { 00:28:19.729 "name": "BaseBdev2", 00:28:19.729 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 
00:28:19.729 "is_configured": true, 00:28:19.729 "data_offset": 2048, 00:28:19.729 "data_size": 63488 00:28:19.729 }, 00:28:19.729 { 00:28:19.729 "name": "BaseBdev3", 00:28:19.729 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:19.729 "is_configured": true, 00:28:19.729 "data_offset": 2048, 00:28:19.729 "data_size": 63488 00:28:19.729 }, 00:28:19.729 { 00:28:19.729 "name": "BaseBdev4", 00:28:19.729 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:19.729 "is_configured": true, 00:28:19.729 "data_offset": 2048, 00:28:19.729 "data_size": 63488 00:28:19.729 } 00:28:19.729 ] 00:28:19.729 }' 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.729 15:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.986 [2024-11-05 15:57:52.232107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.986 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.986 "name": "Existed_Raid", 00:28:19.986 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:19.986 "strip_size_kb": 0, 00:28:19.986 "state": "configuring", 00:28:19.986 "raid_level": "raid1", 00:28:19.986 "superblock": true, 00:28:19.986 "num_base_bdevs": 4, 00:28:19.986 "num_base_bdevs_discovered": 2, 00:28:19.986 "num_base_bdevs_operational": 4, 00:28:19.986 "base_bdevs_list": [ 00:28:19.986 { 00:28:19.986 "name": "BaseBdev1", 00:28:19.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.986 "is_configured": false, 00:28:19.986 "data_offset": 0, 00:28:19.986 "data_size": 0 00:28:19.986 }, 00:28:19.986 { 00:28:19.986 "name": null, 00:28:19.987 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:19.987 
"is_configured": false, 00:28:19.987 "data_offset": 0, 00:28:19.987 "data_size": 63488 00:28:19.987 }, 00:28:19.987 { 00:28:19.987 "name": "BaseBdev3", 00:28:19.987 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:19.987 "is_configured": true, 00:28:19.987 "data_offset": 2048, 00:28:19.987 "data_size": 63488 00:28:19.987 }, 00:28:19.987 { 00:28:19.987 "name": "BaseBdev4", 00:28:19.987 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:19.987 "is_configured": true, 00:28:19.987 "data_offset": 2048, 00:28:19.987 "data_size": 63488 00:28:19.987 } 00:28:19.987 ] 00:28:19.987 }' 00:28:19.987 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.987 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.245 [2024-11-05 15:57:52.598208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:20.245 BaseBdev1 
00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.245 [ 00:28:20.245 { 00:28:20.245 "name": "BaseBdev1", 00:28:20.245 "aliases": [ 00:28:20.245 "906a3fe2-9db5-441c-bfae-c745cbe35c2d" 00:28:20.245 ], 00:28:20.245 "product_name": "Malloc disk", 00:28:20.245 "block_size": 512, 00:28:20.245 "num_blocks": 65536, 00:28:20.245 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:20.245 "assigned_rate_limits": { 00:28:20.245 
"rw_ios_per_sec": 0, 00:28:20.245 "rw_mbytes_per_sec": 0, 00:28:20.245 "r_mbytes_per_sec": 0, 00:28:20.245 "w_mbytes_per_sec": 0 00:28:20.245 }, 00:28:20.245 "claimed": true, 00:28:20.245 "claim_type": "exclusive_write", 00:28:20.245 "zoned": false, 00:28:20.245 "supported_io_types": { 00:28:20.245 "read": true, 00:28:20.245 "write": true, 00:28:20.245 "unmap": true, 00:28:20.245 "flush": true, 00:28:20.245 "reset": true, 00:28:20.245 "nvme_admin": false, 00:28:20.245 "nvme_io": false, 00:28:20.245 "nvme_io_md": false, 00:28:20.245 "write_zeroes": true, 00:28:20.245 "zcopy": true, 00:28:20.245 "get_zone_info": false, 00:28:20.245 "zone_management": false, 00:28:20.245 "zone_append": false, 00:28:20.245 "compare": false, 00:28:20.245 "compare_and_write": false, 00:28:20.245 "abort": true, 00:28:20.245 "seek_hole": false, 00:28:20.245 "seek_data": false, 00:28:20.245 "copy": true, 00:28:20.245 "nvme_iov_md": false 00:28:20.245 }, 00:28:20.245 "memory_domains": [ 00:28:20.245 { 00:28:20.245 "dma_device_id": "system", 00:28:20.245 "dma_device_type": 1 00:28:20.245 }, 00:28:20.245 { 00:28:20.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.245 "dma_device_type": 2 00:28:20.245 } 00:28:20.245 ], 00:28:20.245 "driver_specific": {} 00:28:20.245 } 00:28:20.245 ] 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:20.245 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:20.246 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.503 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:20.503 "name": "Existed_Raid", 00:28:20.503 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:20.503 "strip_size_kb": 0, 00:28:20.503 "state": "configuring", 00:28:20.503 "raid_level": "raid1", 00:28:20.503 "superblock": true, 00:28:20.503 "num_base_bdevs": 4, 00:28:20.503 "num_base_bdevs_discovered": 3, 00:28:20.503 "num_base_bdevs_operational": 4, 00:28:20.503 "base_bdevs_list": [ 00:28:20.503 { 00:28:20.503 "name": "BaseBdev1", 00:28:20.503 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:20.503 "is_configured": true, 00:28:20.503 "data_offset": 2048, 00:28:20.503 "data_size": 63488 
00:28:20.503 }, 00:28:20.503 { 00:28:20.503 "name": null, 00:28:20.503 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:20.503 "is_configured": false, 00:28:20.503 "data_offset": 0, 00:28:20.503 "data_size": 63488 00:28:20.503 }, 00:28:20.503 { 00:28:20.503 "name": "BaseBdev3", 00:28:20.503 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:20.503 "is_configured": true, 00:28:20.503 "data_offset": 2048, 00:28:20.503 "data_size": 63488 00:28:20.503 }, 00:28:20.503 { 00:28:20.503 "name": "BaseBdev4", 00:28:20.503 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:20.503 "is_configured": true, 00:28:20.503 "data_offset": 2048, 00:28:20.503 "data_size": 63488 00:28:20.503 } 00:28:20.503 ] 00:28:20.503 }' 00:28:20.503 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:20.503 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.761 
[2024-11-05 15:57:52.962358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.761 15:57:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:20.761 "name": "Existed_Raid", 00:28:20.761 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:20.761 "strip_size_kb": 0, 00:28:20.761 "state": "configuring", 00:28:20.761 "raid_level": "raid1", 00:28:20.761 "superblock": true, 00:28:20.761 "num_base_bdevs": 4, 00:28:20.761 "num_base_bdevs_discovered": 2, 00:28:20.761 "num_base_bdevs_operational": 4, 00:28:20.761 "base_bdevs_list": [ 00:28:20.761 { 00:28:20.761 "name": "BaseBdev1", 00:28:20.761 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:20.761 "is_configured": true, 00:28:20.761 "data_offset": 2048, 00:28:20.761 "data_size": 63488 00:28:20.761 }, 00:28:20.761 { 00:28:20.761 "name": null, 00:28:20.761 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:20.761 "is_configured": false, 00:28:20.761 "data_offset": 0, 00:28:20.761 "data_size": 63488 00:28:20.761 }, 00:28:20.761 { 00:28:20.761 "name": null, 00:28:20.761 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:20.761 "is_configured": false, 00:28:20.761 "data_offset": 0, 00:28:20.761 "data_size": 63488 00:28:20.761 }, 00:28:20.761 { 00:28:20.761 "name": "BaseBdev4", 00:28:20.761 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:20.761 "is_configured": true, 00:28:20.761 "data_offset": 2048, 00:28:20.761 "data_size": 63488 00:28:20.761 } 00:28:20.761 ] 00:28:20.761 }' 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:20.761 15:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.019 
15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.019 [2024-11-05 15:57:53.286442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.019 "name": "Existed_Raid", 00:28:21.019 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:21.019 "strip_size_kb": 0, 00:28:21.019 "state": "configuring", 00:28:21.019 "raid_level": "raid1", 00:28:21.019 "superblock": true, 00:28:21.019 "num_base_bdevs": 4, 00:28:21.019 "num_base_bdevs_discovered": 3, 00:28:21.019 "num_base_bdevs_operational": 4, 00:28:21.019 "base_bdevs_list": [ 00:28:21.019 { 00:28:21.019 "name": "BaseBdev1", 00:28:21.019 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:21.019 "is_configured": true, 00:28:21.019 "data_offset": 2048, 00:28:21.019 "data_size": 63488 00:28:21.019 }, 00:28:21.019 { 00:28:21.019 "name": null, 00:28:21.019 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:21.019 "is_configured": false, 00:28:21.019 "data_offset": 0, 00:28:21.019 "data_size": 63488 00:28:21.019 }, 00:28:21.019 { 00:28:21.019 "name": "BaseBdev3", 00:28:21.019 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:21.019 "is_configured": true, 00:28:21.019 "data_offset": 2048, 00:28:21.019 "data_size": 63488 00:28:21.019 }, 00:28:21.019 { 00:28:21.019 "name": "BaseBdev4", 00:28:21.019 "uuid": 
"3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:21.019 "is_configured": true, 00:28:21.019 "data_offset": 2048, 00:28:21.019 "data_size": 63488 00:28:21.019 } 00:28:21.019 ] 00:28:21.019 }' 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.019 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.278 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.278 [2024-11-05 15:57:53.634557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.536 "name": "Existed_Raid", 00:28:21.536 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:21.536 "strip_size_kb": 0, 00:28:21.536 "state": "configuring", 00:28:21.536 "raid_level": "raid1", 00:28:21.536 "superblock": true, 00:28:21.536 "num_base_bdevs": 4, 00:28:21.536 "num_base_bdevs_discovered": 2, 00:28:21.536 "num_base_bdevs_operational": 4, 00:28:21.536 "base_bdevs_list": [ 00:28:21.536 { 00:28:21.536 "name": null, 00:28:21.536 
"uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:21.536 "is_configured": false, 00:28:21.536 "data_offset": 0, 00:28:21.536 "data_size": 63488 00:28:21.536 }, 00:28:21.536 { 00:28:21.536 "name": null, 00:28:21.536 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:21.536 "is_configured": false, 00:28:21.536 "data_offset": 0, 00:28:21.536 "data_size": 63488 00:28:21.536 }, 00:28:21.536 { 00:28:21.536 "name": "BaseBdev3", 00:28:21.536 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:21.536 "is_configured": true, 00:28:21.536 "data_offset": 2048, 00:28:21.536 "data_size": 63488 00:28:21.536 }, 00:28:21.536 { 00:28:21.536 "name": "BaseBdev4", 00:28:21.536 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:21.536 "is_configured": true, 00:28:21.536 "data_offset": 2048, 00:28:21.536 "data_size": 63488 00:28:21.536 } 00:28:21.536 ] 00:28:21.536 }' 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.536 15:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.794 [2024-11-05 15:57:54.029582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.794 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.794 "name": "Existed_Raid", 00:28:21.794 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:21.794 "strip_size_kb": 0, 00:28:21.794 "state": "configuring", 00:28:21.794 "raid_level": "raid1", 00:28:21.794 "superblock": true, 00:28:21.794 "num_base_bdevs": 4, 00:28:21.794 "num_base_bdevs_discovered": 3, 00:28:21.794 "num_base_bdevs_operational": 4, 00:28:21.794 "base_bdevs_list": [ 00:28:21.794 { 00:28:21.794 "name": null, 00:28:21.794 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:21.794 "is_configured": false, 00:28:21.794 "data_offset": 0, 00:28:21.794 "data_size": 63488 00:28:21.794 }, 00:28:21.794 { 00:28:21.794 "name": "BaseBdev2", 00:28:21.795 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:21.795 "is_configured": true, 00:28:21.795 "data_offset": 2048, 00:28:21.795 "data_size": 63488 00:28:21.795 }, 00:28:21.795 { 00:28:21.795 "name": "BaseBdev3", 00:28:21.795 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:21.795 "is_configured": true, 00:28:21.795 "data_offset": 2048, 00:28:21.795 "data_size": 63488 00:28:21.795 }, 00:28:21.795 { 00:28:21.795 "name": "BaseBdev4", 00:28:21.795 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:21.795 "is_configured": true, 00:28:21.795 "data_offset": 2048, 00:28:21.795 "data_size": 63488 00:28:21.795 } 00:28:21.795 ] 00:28:21.795 }' 00:28:21.795 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.795 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 906a3fe2-9db5-441c-bfae-c745cbe35c2d 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.072 [2024-11-05 15:57:54.411373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:22.072 [2024-11-05 15:57:54.411564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:22.072 [2024-11-05 15:57:54.411579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:22.072 [2024-11-05 15:57:54.411820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:22.072 
[2024-11-05 15:57:54.411984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:22.072 [2024-11-05 15:57:54.411993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:22.072 [2024-11-05 15:57:54.412109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.072 NewBaseBdev 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.072 15:57:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:22.072 [ 00:28:22.072 { 00:28:22.072 "name": "NewBaseBdev", 00:28:22.072 "aliases": [ 00:28:22.072 "906a3fe2-9db5-441c-bfae-c745cbe35c2d" 00:28:22.072 ], 00:28:22.072 "product_name": "Malloc disk", 00:28:22.072 "block_size": 512, 00:28:22.072 "num_blocks": 65536, 00:28:22.072 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:22.072 "assigned_rate_limits": { 00:28:22.072 "rw_ios_per_sec": 0, 00:28:22.072 "rw_mbytes_per_sec": 0, 00:28:22.072 "r_mbytes_per_sec": 0, 00:28:22.072 "w_mbytes_per_sec": 0 00:28:22.072 }, 00:28:22.072 "claimed": true, 00:28:22.072 "claim_type": "exclusive_write", 00:28:22.072 "zoned": false, 00:28:22.072 "supported_io_types": { 00:28:22.072 "read": true, 00:28:22.072 "write": true, 00:28:22.072 "unmap": true, 00:28:22.072 "flush": true, 00:28:22.072 "reset": true, 00:28:22.072 "nvme_admin": false, 00:28:22.072 "nvme_io": false, 00:28:22.072 "nvme_io_md": false, 00:28:22.072 "write_zeroes": true, 00:28:22.072 "zcopy": true, 00:28:22.072 "get_zone_info": false, 00:28:22.072 "zone_management": false, 00:28:22.072 "zone_append": false, 00:28:22.072 "compare": false, 00:28:22.072 "compare_and_write": false, 00:28:22.072 "abort": true, 00:28:22.073 "seek_hole": false, 00:28:22.073 "seek_data": false, 00:28:22.073 "copy": true, 00:28:22.073 "nvme_iov_md": false 00:28:22.073 }, 00:28:22.073 "memory_domains": [ 00:28:22.073 { 00:28:22.073 "dma_device_id": "system", 00:28:22.073 "dma_device_type": 1 00:28:22.073 }, 00:28:22.073 { 00:28:22.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.073 "dma_device_type": 2 00:28:22.073 } 00:28:22.073 ], 00:28:22.073 "driver_specific": {} 00:28:22.073 } 00:28:22.073 ] 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:22.073 "name": "Existed_Raid", 00:28:22.073 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:22.073 "strip_size_kb": 0, 00:28:22.073 "state": "online", 00:28:22.073 "raid_level": 
"raid1", 00:28:22.073 "superblock": true, 00:28:22.073 "num_base_bdevs": 4, 00:28:22.073 "num_base_bdevs_discovered": 4, 00:28:22.073 "num_base_bdevs_operational": 4, 00:28:22.073 "base_bdevs_list": [ 00:28:22.073 { 00:28:22.073 "name": "NewBaseBdev", 00:28:22.073 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:22.073 "is_configured": true, 00:28:22.073 "data_offset": 2048, 00:28:22.073 "data_size": 63488 00:28:22.073 }, 00:28:22.073 { 00:28:22.073 "name": "BaseBdev2", 00:28:22.073 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:22.073 "is_configured": true, 00:28:22.073 "data_offset": 2048, 00:28:22.073 "data_size": 63488 00:28:22.073 }, 00:28:22.073 { 00:28:22.073 "name": "BaseBdev3", 00:28:22.073 "uuid": "181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:22.073 "is_configured": true, 00:28:22.073 "data_offset": 2048, 00:28:22.073 "data_size": 63488 00:28:22.073 }, 00:28:22.073 { 00:28:22.073 "name": "BaseBdev4", 00:28:22.073 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:22.073 "is_configured": true, 00:28:22.073 "data_offset": 2048, 00:28:22.073 "data_size": 63488 00:28:22.073 } 00:28:22.073 ] 00:28:22.073 }' 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:22.073 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.330 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.330 [2024-11-05 15:57:54.743870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:22.589 "name": "Existed_Raid", 00:28:22.589 "aliases": [ 00:28:22.589 "240c8646-7b05-420c-b567-0e1eb4c2a934" 00:28:22.589 ], 00:28:22.589 "product_name": "Raid Volume", 00:28:22.589 "block_size": 512, 00:28:22.589 "num_blocks": 63488, 00:28:22.589 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:22.589 "assigned_rate_limits": { 00:28:22.589 "rw_ios_per_sec": 0, 00:28:22.589 "rw_mbytes_per_sec": 0, 00:28:22.589 "r_mbytes_per_sec": 0, 00:28:22.589 "w_mbytes_per_sec": 0 00:28:22.589 }, 00:28:22.589 "claimed": false, 00:28:22.589 "zoned": false, 00:28:22.589 "supported_io_types": { 00:28:22.589 "read": true, 00:28:22.589 "write": true, 00:28:22.589 "unmap": false, 00:28:22.589 "flush": false, 00:28:22.589 "reset": true, 00:28:22.589 "nvme_admin": false, 00:28:22.589 "nvme_io": false, 00:28:22.589 "nvme_io_md": false, 00:28:22.589 "write_zeroes": true, 00:28:22.589 "zcopy": false, 00:28:22.589 "get_zone_info": false, 00:28:22.589 "zone_management": false, 00:28:22.589 "zone_append": false, 00:28:22.589 "compare": false, 00:28:22.589 "compare_and_write": false, 00:28:22.589 "abort": false, 00:28:22.589 "seek_hole": false, 
00:28:22.589 "seek_data": false, 00:28:22.589 "copy": false, 00:28:22.589 "nvme_iov_md": false 00:28:22.589 }, 00:28:22.589 "memory_domains": [ 00:28:22.589 { 00:28:22.589 "dma_device_id": "system", 00:28:22.589 "dma_device_type": 1 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.589 "dma_device_type": 2 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "dma_device_id": "system", 00:28:22.589 "dma_device_type": 1 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.589 "dma_device_type": 2 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "dma_device_id": "system", 00:28:22.589 "dma_device_type": 1 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.589 "dma_device_type": 2 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "dma_device_id": "system", 00:28:22.589 "dma_device_type": 1 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.589 "dma_device_type": 2 00:28:22.589 } 00:28:22.589 ], 00:28:22.589 "driver_specific": { 00:28:22.589 "raid": { 00:28:22.589 "uuid": "240c8646-7b05-420c-b567-0e1eb4c2a934", 00:28:22.589 "strip_size_kb": 0, 00:28:22.589 "state": "online", 00:28:22.589 "raid_level": "raid1", 00:28:22.589 "superblock": true, 00:28:22.589 "num_base_bdevs": 4, 00:28:22.589 "num_base_bdevs_discovered": 4, 00:28:22.589 "num_base_bdevs_operational": 4, 00:28:22.589 "base_bdevs_list": [ 00:28:22.589 { 00:28:22.589 "name": "NewBaseBdev", 00:28:22.589 "uuid": "906a3fe2-9db5-441c-bfae-c745cbe35c2d", 00:28:22.589 "is_configured": true, 00:28:22.589 "data_offset": 2048, 00:28:22.589 "data_size": 63488 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "name": "BaseBdev2", 00:28:22.589 "uuid": "0244cd2f-6235-45d5-863c-57823920aec3", 00:28:22.589 "is_configured": true, 00:28:22.589 "data_offset": 2048, 00:28:22.589 "data_size": 63488 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "name": "BaseBdev3", 00:28:22.589 "uuid": 
"181ea639-a3f9-4f7e-adb3-6ee3cef0324b", 00:28:22.589 "is_configured": true, 00:28:22.589 "data_offset": 2048, 00:28:22.589 "data_size": 63488 00:28:22.589 }, 00:28:22.589 { 00:28:22.589 "name": "BaseBdev4", 00:28:22.589 "uuid": "3d0cbe5f-a956-4487-a3be-d3000cbc64d2", 00:28:22.589 "is_configured": true, 00:28:22.589 "data_offset": 2048, 00:28:22.589 "data_size": 63488 00:28:22.589 } 00:28:22.589 ] 00:28:22.589 } 00:28:22.589 } 00:28:22.589 }' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:22.589 BaseBdev2 00:28:22.589 BaseBdev3 00:28:22.589 BaseBdev4' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:22.589 
15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.590 [2024-11-05 15:57:54.951527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:22.590 [2024-11-05 15:57:54.951552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:22.590 [2024-11-05 15:57:54.951609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:22.590 [2024-11-05 15:57:54.951901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:22.590 [2024-11-05 15:57:54.951920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:22.590 15:57:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71664 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 71664 ']' 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 71664 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71664 00:28:22.590 killing process with pid 71664 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71664' 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 71664 00:28:22.590 [2024-11-05 15:57:54.976221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:22.590 15:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 71664 00:28:22.847 [2024-11-05 15:57:55.221125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:23.780 ************************************ 00:28:23.780 END TEST raid_state_function_test_sb 00:28:23.780 ************************************ 00:28:23.780 15:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:28:23.780 00:28:23.780 real 0m8.182s 00:28:23.780 user 0m13.128s 00:28:23.780 sys 0m1.201s 00:28:23.780 15:57:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:23.780 15:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.780 15:57:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:28:23.780 15:57:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:23.780 15:57:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:23.780 15:57:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:23.780 ************************************ 00:28:23.780 START TEST raid_superblock_test 00:28:23.780 ************************************ 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72307 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72307 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72307 ']' 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:23.780 15:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.780 [2024-11-05 15:57:56.030394] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:23.780 [2024-11-05 15:57:56.030525] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72307 ] 00:28:23.780 [2024-11-05 15:57:56.185283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.037 [2024-11-05 15:57:56.277890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.037 [2024-11-05 15:57:56.411912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:24.037 [2024-11-05 15:57:56.411942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:28:24.603 
15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.603 malloc1 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.603 [2024-11-05 15:57:56.902108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:24.603 [2024-11-05 15:57:56.902164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.603 [2024-11-05 15:57:56.902184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:24.603 [2024-11-05 15:57:56.902194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.603 [2024-11-05 15:57:56.904346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.603 [2024-11-05 15:57:56.904383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:24.603 pt1 00:28:24.603 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.604 malloc2 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.604 [2024-11-05 15:57:56.937660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:24.604 [2024-11-05 15:57:56.937703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.604 [2024-11-05 15:57:56.937721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:24.604 [2024-11-05 15:57:56.937729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.604 [2024-11-05 15:57:56.939802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.604 [2024-11-05 15:57:56.939836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:24.604 
pt2 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.604 malloc3 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.604 [2024-11-05 15:57:56.994724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:24.604 [2024-11-05 15:57:56.994772] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.604 [2024-11-05 15:57:56.994793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:24.604 [2024-11-05 15:57:56.994802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.604 [2024-11-05 15:57:56.996873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.604 [2024-11-05 15:57:56.996904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:24.604 pt3 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.604 15:57:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.861 malloc4 00:28:24.861 15:57:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.861 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.862 [2024-11-05 15:57:57.039206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:24.862 [2024-11-05 15:57:57.039268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.862 [2024-11-05 15:57:57.039287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:24.862 [2024-11-05 15:57:57.039299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.862 [2024-11-05 15:57:57.041478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.862 [2024-11-05 15:57:57.041512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:24.862 pt4 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.862 [2024-11-05 15:57:57.047237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:24.862 [2024-11-05 15:57:57.049181] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:24.862 [2024-11-05 15:57:57.049246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:24.862 [2024-11-05 15:57:57.049292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:24.862 [2024-11-05 15:57:57.049479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:24.862 [2024-11-05 15:57:57.049500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:24.862 [2024-11-05 15:57:57.049778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:24.862 [2024-11-05 15:57:57.049979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:24.862 [2024-11-05 15:57:57.049998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:24.862 [2024-11-05 15:57:57.050140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:24.862 
15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:24.862 "name": "raid_bdev1",
00:28:24.862 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72",
00:28:24.862 "strip_size_kb": 0,
00:28:24.862 "state": "online",
00:28:24.862 "raid_level": "raid1",
00:28:24.862 "superblock": true,
00:28:24.862 "num_base_bdevs": 4,
00:28:24.862 "num_base_bdevs_discovered": 4,
00:28:24.862 "num_base_bdevs_operational": 4,
00:28:24.862 "base_bdevs_list": [
00:28:24.862 {
00:28:24.862 "name": "pt1",
00:28:24.862 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:24.862 "is_configured": true,
00:28:24.862 "data_offset": 2048,
00:28:24.862 "data_size": 63488
00:28:24.862 },
00:28:24.862 {
00:28:24.862 "name": "pt2",
00:28:24.862 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:24.862 "is_configured": true,
00:28:24.862 "data_offset": 2048,
00:28:24.862 "data_size": 63488
00:28:24.862 },
00:28:24.862 {
00:28:24.862 "name": "pt3",
00:28:24.862 "uuid": "00000000-0000-0000-0000-000000000003",
00:28:24.862 "is_configured": true,
00:28:24.862 "data_offset": 2048,
00:28:24.862 "data_size": 63488
00:28:24.862 },
00:28:24.862 {
00:28:24.862 "name": "pt4",
00:28:24.862 "uuid": "00000000-0000-0000-0000-000000000004",
00:28:24.862 "is_configured": true,
00:28:24.862 "data_offset": 2048,
00:28:24.862 "data_size": 63488
00:28:24.862 }
00:28:24.862 ]
00:28:24.862 }'
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:24.862 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.119 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:28:25.119 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:28:25.119 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:28:25.120 [2024-11-05 15:57:57.339700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:28:25.120 "name": "raid_bdev1",
00:28:25.120 "aliases": [
00:28:25.120 "9e0161ef-f565-4334-8444-925cfcb66f72"
00:28:25.120 ],
00:28:25.120 "product_name": "Raid Volume",
00:28:25.120 "block_size": 512,
00:28:25.120 "num_blocks": 63488,
00:28:25.120 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72",
00:28:25.120 "assigned_rate_limits": {
00:28:25.120 "rw_ios_per_sec": 0,
00:28:25.120 "rw_mbytes_per_sec": 0,
00:28:25.120 "r_mbytes_per_sec": 0,
00:28:25.120 "w_mbytes_per_sec": 0
00:28:25.120 },
00:28:25.120 "claimed": false,
00:28:25.120 "zoned": false,
00:28:25.120 "supported_io_types": {
00:28:25.120 "read": true,
00:28:25.120 "write": true,
00:28:25.120 "unmap": false,
00:28:25.120 "flush": false,
00:28:25.120 "reset": true,
00:28:25.120 "nvme_admin": false,
00:28:25.120 "nvme_io": false,
00:28:25.120 "nvme_io_md": false,
00:28:25.120 "write_zeroes": true,
00:28:25.120 "zcopy": false,
00:28:25.120 "get_zone_info": false,
00:28:25.120 "zone_management": false,
00:28:25.120 "zone_append": false,
00:28:25.120 "compare": false,
00:28:25.120 "compare_and_write": false,
00:28:25.120 "abort": false,
00:28:25.120 "seek_hole": false,
00:28:25.120 "seek_data": false,
00:28:25.120 "copy": false,
00:28:25.120 "nvme_iov_md": false
00:28:25.120 },
00:28:25.120 "memory_domains": [
00:28:25.120 {
00:28:25.120 "dma_device_id": "system",
00:28:25.120 "dma_device_type": 1
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:25.120 "dma_device_type": 2
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "dma_device_id": "system",
00:28:25.120 "dma_device_type": 1
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:25.120 "dma_device_type": 2
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "dma_device_id": "system",
00:28:25.120 "dma_device_type": 1
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:25.120 "dma_device_type": 2
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "dma_device_id": "system",
00:28:25.120 "dma_device_type": 1
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:25.120 "dma_device_type": 2
00:28:25.120 }
00:28:25.120 ],
00:28:25.120 "driver_specific": {
00:28:25.120 "raid": {
00:28:25.120 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72",
00:28:25.120 "strip_size_kb": 0,
00:28:25.120 "state": "online",
00:28:25.120 "raid_level": "raid1",
00:28:25.120 "superblock": true,
00:28:25.120 "num_base_bdevs": 4,
00:28:25.120 "num_base_bdevs_discovered": 4,
00:28:25.120 "num_base_bdevs_operational": 4,
00:28:25.120 "base_bdevs_list": [
00:28:25.120 {
00:28:25.120 "name": "pt1",
00:28:25.120 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:25.120 "is_configured": true,
00:28:25.120 "data_offset": 2048,
00:28:25.120 "data_size": 63488
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "name": "pt2",
00:28:25.120 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:25.120 "is_configured": true,
00:28:25.120 "data_offset": 2048,
00:28:25.120 "data_size": 63488
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "name": "pt3",
00:28:25.120 "uuid": "00000000-0000-0000-0000-000000000003",
00:28:25.120 "is_configured": true,
00:28:25.120 "data_offset": 2048,
00:28:25.120 "data_size": 63488
00:28:25.120 },
00:28:25.120 {
00:28:25.120 "name": "pt4",
00:28:25.120 "uuid": "00000000-0000-0000-0000-000000000004",
00:28:25.120 "is_configured": true,
00:28:25.120 "data_offset": 2048,
00:28:25.120 "data_size": 63488
00:28:25.120 }
00:28:25.120 ]
00:28:25.120 }
00:28:25.120 }
00:28:25.120 }'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:28:25.120 pt2
00:28:25.120 pt3
00:28:25.120 pt4'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.120 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:28:25.378 [2024-11-05 15:57:57.555674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e0161ef-f565-4334-8444-925cfcb66f72
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e0161ef-f565-4334-8444-925cfcb66f72 ']'
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 [2024-11-05 15:57:57.587343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:28:25.378 [2024-11-05 15:57:57.587378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:28:25.378 [2024-11-05 15:57:57.587462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:28:25.378 [2024-11-05 15:57:57.587573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:28:25.378 [2024-11-05 15:57:57.587597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.378 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.378 [2024-11-05 15:57:57.695374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:28:25.378 [2024-11-05 15:57:57.697396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:28:25.378 [2024-11-05 15:57:57.697448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:28:25.378 [2024-11-05 15:57:57.697484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:28:25.378 [2024-11-05 15:57:57.697535] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:28:25.378 [2024-11-05 15:57:57.697584] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:28:25.378 [2024-11-05 15:57:57.697604] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:28:25.378 [2024-11-05 15:57:57.697624] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:28:25.378 [2024-11-05 15:57:57.697637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:28:25.378 [2024-11-05 15:57:57.697649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:28:25.378 request:
00:28:25.379 {
00:28:25.379 "name": "raid_bdev1",
00:28:25.379 "raid_level": "raid1",
00:28:25.379 "base_bdevs": [
00:28:25.379 "malloc1",
00:28:25.379 "malloc2",
00:28:25.379 "malloc3",
00:28:25.379 "malloc4"
00:28:25.379 ],
00:28:25.379 "superblock": false,
00:28:25.379 "method": "bdev_raid_create",
00:28:25.379 "req_id": 1
00:28:25.379 }
00:28:25.379 Got JSON-RPC error response
00:28:25.379 response:
00:28:25.379 {
00:28:25.379 "code": -17,
00:28:25.379 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:28:25.379 }
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.379 [2024-11-05 15:57:57.739379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:28:25.379 [2024-11-05 15:57:57.739444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:25.379 [2024-11-05 15:57:57.739463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:28:25.379 [2024-11-05 15:57:57.739475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:25.379 [2024-11-05 15:57:57.741751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:25.379 [2024-11-05 15:57:57.741789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:28:25.379 [2024-11-05 15:57:57.741881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:28:25.379 [2024-11-05 15:57:57.741944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:28:25.379 pt1
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:25.379 "name": "raid_bdev1",
00:28:25.379 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72",
00:28:25.379 "strip_size_kb": 0,
00:28:25.379 "state": "configuring",
00:28:25.379 "raid_level": "raid1",
00:28:25.379 "superblock": true,
00:28:25.379 "num_base_bdevs": 4,
00:28:25.379 "num_base_bdevs_discovered": 1,
00:28:25.379 "num_base_bdevs_operational": 4,
00:28:25.379 "base_bdevs_list": [
00:28:25.379 {
00:28:25.379 "name": "pt1",
00:28:25.379 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:25.379 "is_configured": true,
00:28:25.379 "data_offset": 2048,
00:28:25.379 "data_size": 63488
00:28:25.379 },
00:28:25.379 {
00:28:25.379 "name": null,
00:28:25.379 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:25.379 "is_configured": false,
00:28:25.379 "data_offset": 2048,
00:28:25.379 "data_size": 63488
00:28:25.379 },
00:28:25.379 {
00:28:25.379 "name": null,
00:28:25.379 "uuid": "00000000-0000-0000-0000-000000000003",
00:28:25.379 "is_configured": false,
00:28:25.379 "data_offset": 2048,
00:28:25.379 "data_size": 63488
00:28:25.379 },
00:28:25.379 {
00:28:25.379 "name": null,
00:28:25.379 "uuid": "00000000-0000-0000-0000-000000000004",
00:28:25.379 "is_configured": false,
00:28:25.379 "data_offset": 2048,
00:28:25.379 "data_size": 63488
00:28:25.379 }
00:28:25.379 ]
00:28:25.379 }'
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:25.379 15:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.637 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:28:25.637 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:28:25.637 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.637 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.637 [2024-11-05 15:57:58.031500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:28:25.638 [2024-11-05 15:57:58.031584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:25.638 [2024-11-05 15:57:58.031605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:28:25.638 [2024-11-05 15:57:58.031617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:25.638 [2024-11-05 15:57:58.032088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:25.638 [2024-11-05 15:57:58.032117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:28:25.638 [2024-11-05 15:57:58.032197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:28:25.638 [2024-11-05 15:57:58.032225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:28:25.638 pt2
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.638 [2024-11-05 15:57:58.039455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.638 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:25.895 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.895 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:25.895 "name": "raid_bdev1",
00:28:25.895 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72",
00:28:25.895 "strip_size_kb": 0,
00:28:25.895 "state": "configuring",
00:28:25.895 "raid_level": "raid1",
00:28:25.895 "superblock": true,
00:28:25.895 "num_base_bdevs": 4,
00:28:25.895 "num_base_bdevs_discovered": 1,
00:28:25.895 "num_base_bdevs_operational": 4,
00:28:25.895 "base_bdevs_list": [
00:28:25.895 {
00:28:25.895 "name": "pt1",
00:28:25.895 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:25.895 "is_configured": true,
00:28:25.895 "data_offset": 2048,
00:28:25.895 "data_size": 63488
00:28:25.895 },
00:28:25.895 {
00:28:25.895 "name": null,
00:28:25.895 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:25.895 "is_configured": false,
00:28:25.895 "data_offset": 0,
00:28:25.895 "data_size": 63488
00:28:25.895 },
00:28:25.895 {
00:28:25.895 "name": null,
00:28:25.895 "uuid": "00000000-0000-0000-0000-000000000003",
00:28:25.895 "is_configured": false,
00:28:25.895 "data_offset": 2048,
00:28:25.895 "data_size": 63488
00:28:25.895 },
00:28:25.895 {
00:28:25.895 "name": null,
00:28:25.895 "uuid": "00000000-0000-0000-0000-000000000004",
00:28:25.895 "is_configured": false,
00:28:25.895 "data_offset": 2048,
00:28:25.895 "data_size": 63488
00:28:25.895 }
00:28:25.895 ]
00:28:25.895 }'
00:28:25.895 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:25.895 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:26.152 [2024-11-05 15:57:58.343564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:28:26.152 [2024-11-05 15:57:58.343640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:26.152 [2024-11-05 15:57:58.343667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:28:26.152 [2024-11-05 15:57:58.343677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:26.152 [2024-11-05 15:57:58.344152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:26.152 [2024-11-05 15:57:58.344173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:28:26.152 [2024-11-05 15:57:58.344257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:28:26.152 [2024-11-05 15:57:58.344279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:28:26.152 pt2
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:26.152 [2024-11-05 15:57:58.351516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:28:26.152 [2024-11-05 15:57:58.351562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:26.152 [2024-11-05 15:57:58.351581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:28:26.152 [2024-11-05 15:57:58.351592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:26.152 [2024-11-05 15:57:58.351987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:26.152 [2024-11-05 15:57:58.352011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:28:26.152 [2024-11-05 15:57:58.352072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:28:26.152 [2024-11-05 15:57:58.352092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:28:26.152 pt3
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:26.152 [2024-11-05 15:57:58.359494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:28:26.152 [2024-11-05 15:57:58.359531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:26.152 [2024-11-05 15:57:58.359547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:28:26.152 [2024-11-05 15:57:58.359555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:26.152 [2024-11-05 15:57:58.359915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:26.152 [2024-11-05 15:57:58.359938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:28:26.152 [2024-11-05 15:57:58.359995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:28:26.152 [2024-11-05 15:57:58.360012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:28:26.152 [2024-11-05 15:57:58.360153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:28:26.152 [2024-11-05 15:57:58.360168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:28:26.152 [2024-11-05 15:57:58.360427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:28:26.152 [2024-11-05 15:57:58.360568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:28:26.152 [2024-11-05 15:57:58.360584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:28:26.152 [2024-11-05 15:57:58.360706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:26.152 pt4
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.152 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:26.152 "name": "raid_bdev1",
00:28:26.152 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72",
00:28:26.152 "strip_size_kb": 0,
00:28:26.152 "state": "online",
00:28:26.152 "raid_level": "raid1",
00:28:26.152 "superblock": true,
00:28:26.152 "num_base_bdevs": 4,
00:28:26.152 "num_base_bdevs_discovered": 4,
00:28:26.152 "num_base_bdevs_operational": 4,
00:28:26.152 "base_bdevs_list": [
00:28:26.152 {
00:28:26.152 "name": "pt1",
00:28:26.152 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:26.152 "is_configured": true,
00:28:26.152 "data_offset": 2048,
00:28:26.152 "data_size": 63488
00:28:26.153 },
00:28:26.153 {
00:28:26.153 "name": "pt2",
00:28:26.153 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:26.153 "is_configured": true,
00:28:26.153 "data_offset": 2048,
00:28:26.153 "data_size": 63488
00:28:26.153 },
00:28:26.153 {
00:28:26.153 "name": "pt3",
00:28:26.153 "uuid": "00000000-0000-0000-0000-000000000003",
00:28:26.153 "is_configured": true,
00:28:26.153 "data_offset": 2048,
00:28:26.153 "data_size": 63488
00:28:26.153 },
00:28:26.153 {
00:28:26.153 "name": "pt4",
00:28:26.153 "uuid": "00000000-0000-0000-0000-000000000004",
00:28:26.153 "is_configured": true,
00:28:26.153 "data_offset": 2048,
00:28:26.153 "data_size": 63488
00:28:26.153 }
00:28:26.153 ]
00:28:26.153 }'
00:28:26.153 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:26.153 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b
raid_bdev1 00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.409 [2024-11-05 15:57:58.668024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.409 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:26.409 "name": "raid_bdev1", 00:28:26.409 "aliases": [ 00:28:26.409 "9e0161ef-f565-4334-8444-925cfcb66f72" 00:28:26.409 ], 00:28:26.409 "product_name": "Raid Volume", 00:28:26.409 "block_size": 512, 00:28:26.409 "num_blocks": 63488, 00:28:26.409 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:26.410 "assigned_rate_limits": { 00:28:26.410 "rw_ios_per_sec": 0, 00:28:26.410 "rw_mbytes_per_sec": 0, 00:28:26.410 "r_mbytes_per_sec": 0, 00:28:26.410 "w_mbytes_per_sec": 0 00:28:26.410 }, 00:28:26.410 "claimed": false, 00:28:26.410 "zoned": false, 00:28:26.410 "supported_io_types": { 00:28:26.410 "read": true, 00:28:26.410 "write": true, 00:28:26.410 "unmap": false, 00:28:26.410 "flush": false, 00:28:26.410 "reset": true, 00:28:26.410 "nvme_admin": false, 00:28:26.410 "nvme_io": false, 00:28:26.410 "nvme_io_md": false, 00:28:26.410 "write_zeroes": true, 00:28:26.410 "zcopy": false, 00:28:26.410 "get_zone_info": false, 00:28:26.410 "zone_management": false, 00:28:26.410 "zone_append": false, 00:28:26.410 "compare": false, 00:28:26.410 "compare_and_write": false, 00:28:26.410 "abort": false, 00:28:26.410 "seek_hole": false, 00:28:26.410 "seek_data": false, 00:28:26.410 "copy": false, 00:28:26.410 "nvme_iov_md": false 00:28:26.410 }, 00:28:26.410 "memory_domains": [ 00:28:26.410 { 00:28:26.410 "dma_device_id": "system", 00:28:26.410 
"dma_device_type": 1 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.410 "dma_device_type": 2 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "dma_device_id": "system", 00:28:26.410 "dma_device_type": 1 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.410 "dma_device_type": 2 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "dma_device_id": "system", 00:28:26.410 "dma_device_type": 1 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.410 "dma_device_type": 2 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "dma_device_id": "system", 00:28:26.410 "dma_device_type": 1 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.410 "dma_device_type": 2 00:28:26.410 } 00:28:26.410 ], 00:28:26.410 "driver_specific": { 00:28:26.410 "raid": { 00:28:26.410 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:26.410 "strip_size_kb": 0, 00:28:26.410 "state": "online", 00:28:26.410 "raid_level": "raid1", 00:28:26.410 "superblock": true, 00:28:26.410 "num_base_bdevs": 4, 00:28:26.410 "num_base_bdevs_discovered": 4, 00:28:26.410 "num_base_bdevs_operational": 4, 00:28:26.410 "base_bdevs_list": [ 00:28:26.410 { 00:28:26.410 "name": "pt1", 00:28:26.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:26.410 "is_configured": true, 00:28:26.410 "data_offset": 2048, 00:28:26.410 "data_size": 63488 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "name": "pt2", 00:28:26.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:26.410 "is_configured": true, 00:28:26.410 "data_offset": 2048, 00:28:26.410 "data_size": 63488 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "name": "pt3", 00:28:26.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:26.410 "is_configured": true, 00:28:26.410 "data_offset": 2048, 00:28:26.410 "data_size": 63488 00:28:26.410 }, 00:28:26.410 { 00:28:26.410 "name": "pt4", 00:28:26.410 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:28:26.410 "is_configured": true, 00:28:26.410 "data_offset": 2048, 00:28:26.410 "data_size": 63488 00:28:26.410 } 00:28:26.410 ] 00:28:26.410 } 00:28:26.410 } 00:28:26.410 }' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:26.410 pt2 00:28:26.410 pt3 00:28:26.410 pt4' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.410 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.666 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.667 15:57:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:26.667 [2024-11-05 15:57:58.904047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e0161ef-f565-4334-8444-925cfcb66f72 '!=' 9e0161ef-f565-4334-8444-925cfcb66f72 ']' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.667 [2024-11-05 15:57:58.935718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:26.667 
15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:26.667 "name": "raid_bdev1", 00:28:26.667 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:26.667 "strip_size_kb": 0, 00:28:26.667 "state": 
"online", 00:28:26.667 "raid_level": "raid1", 00:28:26.667 "superblock": true, 00:28:26.667 "num_base_bdevs": 4, 00:28:26.667 "num_base_bdevs_discovered": 3, 00:28:26.667 "num_base_bdevs_operational": 3, 00:28:26.667 "base_bdevs_list": [ 00:28:26.667 { 00:28:26.667 "name": null, 00:28:26.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.667 "is_configured": false, 00:28:26.667 "data_offset": 0, 00:28:26.667 "data_size": 63488 00:28:26.667 }, 00:28:26.667 { 00:28:26.667 "name": "pt2", 00:28:26.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:26.667 "is_configured": true, 00:28:26.667 "data_offset": 2048, 00:28:26.667 "data_size": 63488 00:28:26.667 }, 00:28:26.667 { 00:28:26.667 "name": "pt3", 00:28:26.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:26.667 "is_configured": true, 00:28:26.667 "data_offset": 2048, 00:28:26.667 "data_size": 63488 00:28:26.667 }, 00:28:26.667 { 00:28:26.667 "name": "pt4", 00:28:26.667 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:26.667 "is_configured": true, 00:28:26.667 "data_offset": 2048, 00:28:26.667 "data_size": 63488 00:28:26.667 } 00:28:26.667 ] 00:28:26.667 }' 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:26.667 15:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.924 [2024-11-05 15:57:59.231743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:26.924 [2024-11-05 15:57:59.231786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:26.924 [2024-11-05 15:57:59.231887] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:26.924 [2024-11-05 15:57:59.231975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:26.924 [2024-11-05 15:57:59.231992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:28:26.924 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.925 [2024-11-05 15:57:59.295721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:26.925 [2024-11-05 
15:57:59.295779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.925 [2024-11-05 15:57:59.295797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:26.925 [2024-11-05 15:57:59.295807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.925 [2024-11-05 15:57:59.298140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.925 [2024-11-05 15:57:59.298174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:26.925 [2024-11-05 15:57:59.298252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:26.925 [2024-11-05 15:57:59.298297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:26.925 pt2 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:26.925 15:57:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.925 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.181 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:27.181 "name": "raid_bdev1", 00:28:27.181 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:27.181 "strip_size_kb": 0, 00:28:27.181 "state": "configuring", 00:28:27.181 "raid_level": "raid1", 00:28:27.181 "superblock": true, 00:28:27.181 "num_base_bdevs": 4, 00:28:27.181 "num_base_bdevs_discovered": 1, 00:28:27.181 "num_base_bdevs_operational": 3, 00:28:27.181 "base_bdevs_list": [ 00:28:27.181 { 00:28:27.181 "name": null, 00:28:27.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.181 "is_configured": false, 00:28:27.181 "data_offset": 2048, 00:28:27.181 "data_size": 63488 00:28:27.181 }, 00:28:27.181 { 00:28:27.181 "name": "pt2", 00:28:27.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:27.181 "is_configured": true, 00:28:27.181 "data_offset": 2048, 00:28:27.181 "data_size": 63488 00:28:27.181 }, 00:28:27.181 { 00:28:27.181 "name": null, 00:28:27.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:27.181 "is_configured": false, 00:28:27.181 "data_offset": 2048, 00:28:27.181 "data_size": 63488 00:28:27.181 }, 00:28:27.181 { 00:28:27.181 "name": null, 00:28:27.181 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:27.181 "is_configured": false, 00:28:27.181 "data_offset": 2048, 00:28:27.181 "data_size": 63488 00:28:27.181 
} 00:28:27.181 ] 00:28:27.181 }' 00:28:27.181 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:27.181 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.442 [2024-11-05 15:57:59.603855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:27.442 [2024-11-05 15:57:59.603929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.442 [2024-11-05 15:57:59.603952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:28:27.442 [2024-11-05 15:57:59.603961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.442 [2024-11-05 15:57:59.604427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.442 [2024-11-05 15:57:59.604452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:27.442 [2024-11-05 15:57:59.604538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:27.442 [2024-11-05 15:57:59.604566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:27.442 pt3 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:27.442 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:27.443 "name": "raid_bdev1", 00:28:27.443 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:27.443 "strip_size_kb": 0, 00:28:27.443 "state": "configuring", 00:28:27.443 "raid_level": "raid1", 00:28:27.443 "superblock": true, 00:28:27.443 "num_base_bdevs": 4, 00:28:27.443 "num_base_bdevs_discovered": 2, 
00:28:27.443 "num_base_bdevs_operational": 3, 00:28:27.443 "base_bdevs_list": [ 00:28:27.443 { 00:28:27.443 "name": null, 00:28:27.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.443 "is_configured": false, 00:28:27.443 "data_offset": 2048, 00:28:27.443 "data_size": 63488 00:28:27.443 }, 00:28:27.443 { 00:28:27.443 "name": "pt2", 00:28:27.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:27.443 "is_configured": true, 00:28:27.443 "data_offset": 2048, 00:28:27.443 "data_size": 63488 00:28:27.443 }, 00:28:27.443 { 00:28:27.443 "name": "pt3", 00:28:27.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:27.443 "is_configured": true, 00:28:27.443 "data_offset": 2048, 00:28:27.443 "data_size": 63488 00:28:27.443 }, 00:28:27.443 { 00:28:27.443 "name": null, 00:28:27.443 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:27.443 "is_configured": false, 00:28:27.443 "data_offset": 2048, 00:28:27.443 "data_size": 63488 00:28:27.443 } 00:28:27.443 ] 00:28:27.443 }' 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:27.443 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.701 [2024-11-05 15:57:59.911940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:27.701 [2024-11-05 
15:57:59.912017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.701 [2024-11-05 15:57:59.912040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:27.701 [2024-11-05 15:57:59.912050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.701 [2024-11-05 15:57:59.912510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.701 [2024-11-05 15:57:59.912536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:27.701 [2024-11-05 15:57:59.912619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:27.701 [2024-11-05 15:57:59.912645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:27.701 [2024-11-05 15:57:59.912779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:27.701 [2024-11-05 15:57:59.912795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:27.701 [2024-11-05 15:57:59.913057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:27.701 [2024-11-05 15:57:59.913199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:27.701 [2024-11-05 15:57:59.913225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:27.701 [2024-11-05 15:57:59.913355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:27.701 pt4 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:27.701 15:57:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:27.701 "name": "raid_bdev1", 00:28:27.701 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:27.701 "strip_size_kb": 0, 00:28:27.701 "state": "online", 00:28:27.701 "raid_level": "raid1", 00:28:27.701 "superblock": true, 00:28:27.701 "num_base_bdevs": 4, 00:28:27.701 "num_base_bdevs_discovered": 3, 00:28:27.701 "num_base_bdevs_operational": 3, 00:28:27.701 "base_bdevs_list": [ 00:28:27.701 { 00:28:27.701 "name": null, 00:28:27.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.701 
"is_configured": false, 00:28:27.701 "data_offset": 2048, 00:28:27.701 "data_size": 63488 00:28:27.701 }, 00:28:27.701 { 00:28:27.701 "name": "pt2", 00:28:27.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:27.701 "is_configured": true, 00:28:27.701 "data_offset": 2048, 00:28:27.701 "data_size": 63488 00:28:27.701 }, 00:28:27.701 { 00:28:27.701 "name": "pt3", 00:28:27.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:27.701 "is_configured": true, 00:28:27.701 "data_offset": 2048, 00:28:27.701 "data_size": 63488 00:28:27.701 }, 00:28:27.701 { 00:28:27.701 "name": "pt4", 00:28:27.701 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:27.701 "is_configured": true, 00:28:27.701 "data_offset": 2048, 00:28:27.701 "data_size": 63488 00:28:27.701 } 00:28:27.701 ] 00:28:27.701 }' 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:27.701 15:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.959 [2024-11-05 15:58:00.211959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:27.959 [2024-11-05 15:58:00.212000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:27.959 [2024-11-05 15:58:00.212083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:27.959 [2024-11-05 15:58:00.212163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:27.959 [2024-11-05 15:58:00.212176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.959 [2024-11-05 15:58:00.259938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:27.959 [2024-11-05 15:58:00.260003] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:28:27.959 [2024-11-05 15:58:00.260019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:28:27.959 [2024-11-05 15:58:00.260030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.959 [2024-11-05 15:58:00.262397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.959 [2024-11-05 15:58:00.262434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:27.959 [2024-11-05 15:58:00.262533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:27.959 [2024-11-05 15:58:00.262582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:27.959 [2024-11-05 15:58:00.262704] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:27.959 [2024-11-05 15:58:00.262724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:27.959 [2024-11-05 15:58:00.262740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:28:27.959 [2024-11-05 15:58:00.262802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:27.959 [2024-11-05 15:58:00.262925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:27.959 pt1 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.959 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:27.959 "name": "raid_bdev1", 00:28:27.959 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:27.959 "strip_size_kb": 0, 00:28:27.960 "state": "configuring", 00:28:27.960 "raid_level": "raid1", 00:28:27.960 "superblock": true, 00:28:27.960 "num_base_bdevs": 4, 00:28:27.960 "num_base_bdevs_discovered": 2, 00:28:27.960 "num_base_bdevs_operational": 3, 00:28:27.960 "base_bdevs_list": [ 00:28:27.960 { 00:28:27.960 "name": null, 00:28:27.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.960 "is_configured": false, 00:28:27.960 
"data_offset": 2048, 00:28:27.960 "data_size": 63488 00:28:27.960 }, 00:28:27.960 { 00:28:27.960 "name": "pt2", 00:28:27.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:27.960 "is_configured": true, 00:28:27.960 "data_offset": 2048, 00:28:27.960 "data_size": 63488 00:28:27.960 }, 00:28:27.960 { 00:28:27.960 "name": "pt3", 00:28:27.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:27.960 "is_configured": true, 00:28:27.960 "data_offset": 2048, 00:28:27.960 "data_size": 63488 00:28:27.960 }, 00:28:27.960 { 00:28:27.960 "name": null, 00:28:27.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:27.960 "is_configured": false, 00:28:27.960 "data_offset": 2048, 00:28:27.960 "data_size": 63488 00:28:27.960 } 00:28:27.960 ] 00:28:27.960 }' 00:28:27.960 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:27.960 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:28:28.217 [2024-11-05 15:58:00.600048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:28.217 [2024-11-05 15:58:00.600122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.217 [2024-11-05 15:58:00.600146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:28.217 [2024-11-05 15:58:00.600156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.217 [2024-11-05 15:58:00.600602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.217 [2024-11-05 15:58:00.600616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:28.217 [2024-11-05 15:58:00.600699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:28.217 [2024-11-05 15:58:00.600724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:28.217 [2024-11-05 15:58:00.600872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:28:28.217 [2024-11-05 15:58:00.600882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:28.217 [2024-11-05 15:58:00.601136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:28.217 [2024-11-05 15:58:00.601270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:28:28.217 [2024-11-05 15:58:00.601281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:28:28.217 [2024-11-05 15:58:00.601415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.217 pt4 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.217 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.474 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:28.474 "name": "raid_bdev1", 00:28:28.474 "uuid": "9e0161ef-f565-4334-8444-925cfcb66f72", 00:28:28.474 "strip_size_kb": 0, 00:28:28.474 "state": "online", 00:28:28.474 "raid_level": "raid1", 00:28:28.474 "superblock": true, 00:28:28.474 "num_base_bdevs": 4, 00:28:28.474 "num_base_bdevs_discovered": 3, 00:28:28.474 "num_base_bdevs_operational": 3, 00:28:28.474 
"base_bdevs_list": [ 00:28:28.474 { 00:28:28.474 "name": null, 00:28:28.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.474 "is_configured": false, 00:28:28.474 "data_offset": 2048, 00:28:28.474 "data_size": 63488 00:28:28.474 }, 00:28:28.474 { 00:28:28.474 "name": "pt2", 00:28:28.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:28.474 "is_configured": true, 00:28:28.474 "data_offset": 2048, 00:28:28.474 "data_size": 63488 00:28:28.474 }, 00:28:28.474 { 00:28:28.474 "name": "pt3", 00:28:28.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:28.474 "is_configured": true, 00:28:28.474 "data_offset": 2048, 00:28:28.474 "data_size": 63488 00:28:28.474 }, 00:28:28.474 { 00:28:28.474 "name": "pt4", 00:28:28.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:28.474 "is_configured": true, 00:28:28.474 "data_offset": 2048, 00:28:28.474 "data_size": 63488 00:28:28.474 } 00:28:28.474 ] 00:28:28.474 }' 00:28:28.474 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:28.474 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.730 [2024-11-05 15:58:00.932376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9e0161ef-f565-4334-8444-925cfcb66f72 '!=' 9e0161ef-f565-4334-8444-925cfcb66f72 ']' 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72307 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72307 ']' 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72307 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72307 00:28:28.730 killing process with pid 72307 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72307' 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72307 00:28:28.730 [2024-11-05 15:58:00.973415] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:28.730 15:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72307 
00:28:28.730 [2024-11-05 15:58:00.973511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:28.730 [2024-11-05 15:58:00.973584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:28.730 [2024-11-05 15:58:00.973595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:28:28.987 [2024-11-05 15:58:01.178050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:29.551 15:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:28:29.551 00:28:29.551 real 0m5.817s 00:28:29.551 user 0m9.245s 00:28:29.551 sys 0m0.928s 00:28:29.551 15:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:29.551 15:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.551 ************************************ 00:28:29.551 END TEST raid_superblock_test 00:28:29.551 ************************************ 00:28:29.551 15:58:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:28:29.551 15:58:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:29.551 15:58:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:29.551 15:58:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:29.551 ************************************ 00:28:29.551 START TEST raid_read_error_test 00:28:29.551 ************************************ 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:29.551 
15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CyK44W9h9X 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72767 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72767 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 72767 ']' 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:29.551 15:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.551 [2024-11-05 15:58:01.896933] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:29.551 [2024-11-05 15:58:01.897056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72767 ] 00:28:29.808 [2024-11-05 15:58:02.050738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.808 [2024-11-05 15:58:02.150427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.066 [2024-11-05 15:58:02.275537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:30.066 [2024-11-05 15:58:02.275584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.324 BaseBdev1_malloc 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.324 true 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.324 [2024-11-05 15:58:02.711571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:30.324 [2024-11-05 15:58:02.711625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.324 [2024-11-05 15:58:02.711643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:30.324 [2024-11-05 15:58:02.711653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.324 [2024-11-05 15:58:02.713743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.324 [2024-11-05 15:58:02.713782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:30.324 BaseBdev1 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.324 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 BaseBdev2_malloc 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 true 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 [2024-11-05 15:58:02.755188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:30.582 [2024-11-05 15:58:02.755233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.582 [2024-11-05 15:58:02.755248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:30.582 [2024-11-05 15:58:02.755258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.582 [2024-11-05 15:58:02.757317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.582 [2024-11-05 15:58:02.757352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:30.582 BaseBdev2 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 BaseBdev3_malloc 00:28:30.582 15:58:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 true 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 [2024-11-05 15:58:02.807317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:30.582 [2024-11-05 15:58:02.807365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.582 [2024-11-05 15:58:02.807381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:30.582 [2024-11-05 15:58:02.807392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.582 [2024-11-05 15:58:02.809488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.582 [2024-11-05 15:58:02.809524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:30.582 BaseBdev3 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 BaseBdev4_malloc 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 true 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 [2024-11-05 15:58:02.850877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:30.582 [2024-11-05 15:58:02.850921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.582 [2024-11-05 15:58:02.850937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:30.582 [2024-11-05 15:58:02.850947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.582 [2024-11-05 15:58:02.853003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.582 [2024-11-05 15:58:02.853039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:30.582 BaseBdev4 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 [2024-11-05 15:58:02.858942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:30.582 [2024-11-05 15:58:02.860749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:30.582 [2024-11-05 15:58:02.860826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:30.582 [2024-11-05 15:58:02.860903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:30.582 [2024-11-05 15:58:02.861124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:28:30.582 [2024-11-05 15:58:02.861143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:30.582 [2024-11-05 15:58:02.861379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:28:30.582 [2024-11-05 15:58:02.861532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:28:30.582 [2024-11-05 15:58:02.861541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:28:30.582 [2024-11-05 15:58:02.861679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:30.582 15:58:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.582 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.583 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.583 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:30.583 "name": "raid_bdev1", 00:28:30.583 "uuid": "cdef249d-6b7e-4c24-9485-a06630001d2d", 00:28:30.583 "strip_size_kb": 0, 00:28:30.583 "state": "online", 00:28:30.583 "raid_level": "raid1", 00:28:30.583 "superblock": true, 00:28:30.583 "num_base_bdevs": 4, 00:28:30.583 "num_base_bdevs_discovered": 4, 00:28:30.583 "num_base_bdevs_operational": 4, 00:28:30.583 "base_bdevs_list": [ 00:28:30.583 { 
00:28:30.583 "name": "BaseBdev1", 00:28:30.583 "uuid": "f51e928f-c547-54bc-813f-a972d7dd400f", 00:28:30.583 "is_configured": true, 00:28:30.583 "data_offset": 2048, 00:28:30.583 "data_size": 63488 00:28:30.583 }, 00:28:30.583 { 00:28:30.583 "name": "BaseBdev2", 00:28:30.583 "uuid": "f61b5f36-c3ea-5a27-8c45-f63019b9ede7", 00:28:30.583 "is_configured": true, 00:28:30.583 "data_offset": 2048, 00:28:30.583 "data_size": 63488 00:28:30.583 }, 00:28:30.583 { 00:28:30.583 "name": "BaseBdev3", 00:28:30.583 "uuid": "1f5bca43-547b-55bb-939b-c8b504917d7c", 00:28:30.583 "is_configured": true, 00:28:30.583 "data_offset": 2048, 00:28:30.583 "data_size": 63488 00:28:30.583 }, 00:28:30.583 { 00:28:30.583 "name": "BaseBdev4", 00:28:30.583 "uuid": "c7cd2ce2-d8ff-55db-846e-85eb64f3a3ab", 00:28:30.583 "is_configured": true, 00:28:30.583 "data_offset": 2048, 00:28:30.583 "data_size": 63488 00:28:30.583 } 00:28:30.583 ] 00:28:30.583 }' 00:28:30.583 15:58:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:30.583 15:58:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.841 15:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:30.841 15:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:31.098 [2024-11-05 15:58:03.264014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.029 15:58:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.029 15:58:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:32.029 "name": "raid_bdev1", 00:28:32.029 "uuid": "cdef249d-6b7e-4c24-9485-a06630001d2d", 00:28:32.029 "strip_size_kb": 0, 00:28:32.029 "state": "online", 00:28:32.029 "raid_level": "raid1", 00:28:32.029 "superblock": true, 00:28:32.029 "num_base_bdevs": 4, 00:28:32.029 "num_base_bdevs_discovered": 4, 00:28:32.029 "num_base_bdevs_operational": 4, 00:28:32.029 "base_bdevs_list": [ 00:28:32.029 { 00:28:32.029 "name": "BaseBdev1", 00:28:32.029 "uuid": "f51e928f-c547-54bc-813f-a972d7dd400f", 00:28:32.029 "is_configured": true, 00:28:32.029 "data_offset": 2048, 00:28:32.029 "data_size": 63488 00:28:32.029 }, 00:28:32.029 { 00:28:32.029 "name": "BaseBdev2", 00:28:32.029 "uuid": "f61b5f36-c3ea-5a27-8c45-f63019b9ede7", 00:28:32.029 "is_configured": true, 00:28:32.029 "data_offset": 2048, 00:28:32.029 "data_size": 63488 00:28:32.029 }, 00:28:32.029 { 00:28:32.029 "name": "BaseBdev3", 00:28:32.029 "uuid": "1f5bca43-547b-55bb-939b-c8b504917d7c", 00:28:32.029 "is_configured": true, 00:28:32.029 "data_offset": 2048, 00:28:32.029 "data_size": 63488 00:28:32.029 }, 00:28:32.029 { 00:28:32.029 "name": "BaseBdev4", 00:28:32.029 "uuid": "c7cd2ce2-d8ff-55db-846e-85eb64f3a3ab", 00:28:32.029 "is_configured": true, 00:28:32.029 "data_offset": 2048, 00:28:32.029 "data_size": 63488 00:28:32.029 } 00:28:32.029 ] 00:28:32.029 }' 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:32.029 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:32.287 [2024-11-05 15:58:04.466640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:32.287 [2024-11-05 15:58:04.466675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:32.287 [2024-11-05 15:58:04.469647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.287 [2024-11-05 15:58:04.469704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:32.287 [2024-11-05 15:58:04.469825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:32.287 [2024-11-05 15:58:04.469853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:28:32.287 { 00:28:32.287 "results": [ 00:28:32.287 { 00:28:32.287 "job": "raid_bdev1", 00:28:32.287 "core_mask": "0x1", 00:28:32.287 "workload": "randrw", 00:28:32.287 "percentage": 50, 00:28:32.287 "status": "finished", 00:28:32.287 "queue_depth": 1, 00:28:32.287 "io_size": 131072, 00:28:32.287 "runtime": 1.200742, 00:28:32.287 "iops": 11641.135231381928, 00:28:32.287 "mibps": 1455.141903922741, 00:28:32.287 "io_failed": 0, 00:28:32.287 "io_timeout": 0, 00:28:32.287 "avg_latency_us": 82.77040536227257, 00:28:32.287 "min_latency_us": 29.341538461538462, 00:28:32.287 "max_latency_us": 1751.8276923076924 00:28:32.287 } 00:28:32.287 ], 00:28:32.287 "core_count": 1 00:28:32.287 } 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72767 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 72767 ']' 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 72767 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72767 00:28:32.287 killing process with pid 72767 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72767' 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 72767 00:28:32.287 [2024-11-05 15:58:04.494837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:32.287 15:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 72767 00:28:32.287 [2024-11-05 15:58:04.692979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CyK44W9h9X 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:33.217 00:28:33.217 real 0m3.602s 00:28:33.217 user 0m4.191s 00:28:33.217 sys 0m0.408s 
00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:33.217 15:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.217 ************************************ 00:28:33.217 END TEST raid_read_error_test 00:28:33.217 ************************************ 00:28:33.217 15:58:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:28:33.218 15:58:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:33.218 15:58:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:33.218 15:58:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:33.218 ************************************ 00:28:33.218 START TEST raid_write_error_test 00:28:33.218 ************************************ 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6DEu4le4rX 00:28:33.218 15:58:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72902 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72902 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 72902 ']' 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:33.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:33.218 15:58:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.218 [2024-11-05 15:58:05.549879] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:33.218 [2024-11-05 15:58:05.550417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72902 ] 00:28:33.475 [2024-11-05 15:58:05.718553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.475 [2024-11-05 15:58:05.814834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.732 [2024-11-05 15:58:05.949163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:33.732 [2024-11-05 15:58:05.949196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.989 BaseBdev1_malloc 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.989 true 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.989 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.989 [2024-11-05 15:58:06.400773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:33.989 [2024-11-05 15:58:06.400852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.989 [2024-11-05 15:58:06.400880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:33.989 [2024-11-05 15:58:06.400896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.989 [2024-11-05 15:58:06.403107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.989 [2024-11-05 15:58:06.403148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:33.989 BaseBdev1 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.246 BaseBdev2_malloc 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:34.246 15:58:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.246 true 00:28:34.246 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 [2024-11-05 15:58:06.444655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:34.247 [2024-11-05 15:58:06.444713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.247 [2024-11-05 15:58:06.444736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:34.247 [2024-11-05 15:58:06.444751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.247 [2024-11-05 15:58:06.446929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.247 [2024-11-05 15:58:06.446969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:34.247 BaseBdev2 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:28:34.247 BaseBdev3_malloc 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 true 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 [2024-11-05 15:58:06.501449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:34.247 [2024-11-05 15:58:06.501501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.247 [2024-11-05 15:58:06.501525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:34.247 [2024-11-05 15:58:06.501540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.247 [2024-11-05 15:58:06.503704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.247 [2024-11-05 15:58:06.503748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:34.247 BaseBdev3 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 BaseBdev4_malloc 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 true 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 [2024-11-05 15:58:06.545395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:34.247 [2024-11-05 15:58:06.545454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.247 [2024-11-05 15:58:06.545479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:34.247 [2024-11-05 15:58:06.545495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.247 [2024-11-05 15:58:06.547653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.247 [2024-11-05 15:58:06.547700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:34.247 BaseBdev4 
00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 [2024-11-05 15:58:06.553465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:34.247 [2024-11-05 15:58:06.555382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:34.247 [2024-11-05 15:58:06.555489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:34.247 [2024-11-05 15:58:06.555579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:34.247 [2024-11-05 15:58:06.555835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:28:34.247 [2024-11-05 15:58:06.555874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:34.247 [2024-11-05 15:58:06.556154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:28:34.247 [2024-11-05 15:58:06.556343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:28:34.247 [2024-11-05 15:58:06.556362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:28:34.247 [2024-11-05 15:58:06.556544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:34.247 "name": "raid_bdev1", 00:28:34.247 "uuid": "ffa24cca-aa0b-48fd-93ec-dc60295b7db9", 00:28:34.247 "strip_size_kb": 0, 00:28:34.247 "state": "online", 00:28:34.247 "raid_level": "raid1", 00:28:34.247 "superblock": true, 00:28:34.247 "num_base_bdevs": 4, 00:28:34.247 "num_base_bdevs_discovered": 4, 00:28:34.247 
"num_base_bdevs_operational": 4, 00:28:34.247 "base_bdevs_list": [ 00:28:34.247 { 00:28:34.247 "name": "BaseBdev1", 00:28:34.247 "uuid": "61ef42e0-2709-53dc-828c-738838db7938", 00:28:34.247 "is_configured": true, 00:28:34.247 "data_offset": 2048, 00:28:34.247 "data_size": 63488 00:28:34.247 }, 00:28:34.247 { 00:28:34.247 "name": "BaseBdev2", 00:28:34.247 "uuid": "4764a83a-9c31-5b62-a428-642b5093848a", 00:28:34.247 "is_configured": true, 00:28:34.247 "data_offset": 2048, 00:28:34.247 "data_size": 63488 00:28:34.247 }, 00:28:34.247 { 00:28:34.247 "name": "BaseBdev3", 00:28:34.247 "uuid": "9e9f6328-0805-529a-a994-d32caf340bf3", 00:28:34.247 "is_configured": true, 00:28:34.247 "data_offset": 2048, 00:28:34.247 "data_size": 63488 00:28:34.247 }, 00:28:34.247 { 00:28:34.247 "name": "BaseBdev4", 00:28:34.247 "uuid": "27043ed2-96cd-5ca1-a77d-c95c70ef2d40", 00:28:34.247 "is_configured": true, 00:28:34.247 "data_offset": 2048, 00:28:34.247 "data_size": 63488 00:28:34.247 } 00:28:34.247 ] 00:28:34.247 }' 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:34.247 15:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.504 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:34.504 15:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:34.761 [2024-11-05 15:58:06.962495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.692 [2024-11-05 15:58:07.878235] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:28:35.692 [2024-11-05 15:58:07.878300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:35.692 [2024-11-05 15:58:07.878574] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:35.692 "name": "raid_bdev1", 00:28:35.692 "uuid": "ffa24cca-aa0b-48fd-93ec-dc60295b7db9", 00:28:35.692 "strip_size_kb": 0, 00:28:35.692 "state": "online", 00:28:35.692 "raid_level": "raid1", 00:28:35.692 "superblock": true, 00:28:35.692 "num_base_bdevs": 4, 00:28:35.692 "num_base_bdevs_discovered": 3, 00:28:35.692 "num_base_bdevs_operational": 3, 00:28:35.692 "base_bdevs_list": [ 00:28:35.692 { 00:28:35.692 "name": null, 00:28:35.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.692 "is_configured": false, 00:28:35.692 "data_offset": 0, 00:28:35.692 "data_size": 63488 00:28:35.692 }, 00:28:35.692 { 00:28:35.692 "name": "BaseBdev2", 00:28:35.692 "uuid": "4764a83a-9c31-5b62-a428-642b5093848a", 00:28:35.692 "is_configured": true, 00:28:35.692 "data_offset": 2048, 00:28:35.692 "data_size": 63488 00:28:35.692 }, 00:28:35.692 { 00:28:35.692 "name": "BaseBdev3", 00:28:35.692 "uuid": "9e9f6328-0805-529a-a994-d32caf340bf3", 00:28:35.692 "is_configured": true, 00:28:35.692 "data_offset": 2048, 00:28:35.692 "data_size": 63488 00:28:35.692 }, 00:28:35.692 { 00:28:35.692 "name": "BaseBdev4", 00:28:35.692 "uuid": "27043ed2-96cd-5ca1-a77d-c95c70ef2d40", 00:28:35.692 "is_configured": true, 00:28:35.692 "data_offset": 2048, 00:28:35.692 "data_size": 63488 00:28:35.692 } 00:28:35.692 ] 
00:28:35.692 }' 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:35.692 15:58:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.953 [2024-11-05 15:58:08.242320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:35.953 [2024-11-05 15:58:08.242355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:35.953 [2024-11-05 15:58:08.245397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:35.953 [2024-11-05 15:58:08.245452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.953 [2024-11-05 15:58:08.245606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:35.953 [2024-11-05 15:58:08.245628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72902 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 72902 ']' 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 72902 00:28:35.953 { 00:28:35.953 "results": [ 00:28:35.953 { 00:28:35.953 "job": "raid_bdev1", 00:28:35.953 "core_mask": "0x1", 00:28:35.953 "workload": "randrw", 00:28:35.953 "percentage": 50, 00:28:35.953 "status": "finished", 00:28:35.953 "queue_depth": 1, 
00:28:35.953 "io_size": 131072, 00:28:35.953 "runtime": 1.277888, 00:28:35.953 "iops": 12027.658135924275, 00:28:35.953 "mibps": 1503.4572669905344, 00:28:35.953 "io_failed": 0, 00:28:35.953 "io_timeout": 0, 00:28:35.953 "avg_latency_us": 79.87980261248187, 00:28:35.953 "min_latency_us": 29.53846153846154, 00:28:35.953 "max_latency_us": 1726.6215384615384 00:28:35.953 } 00:28:35.953 ], 00:28:35.953 "core_count": 1 00:28:35.953 } 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72902 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:35.953 killing process with pid 72902 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72902' 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 72902 00:28:35.953 15:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 72902 00:28:35.953 [2024-11-05 15:58:08.271023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:36.210 [2024-11-05 15:58:08.468604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6DEu4le4rX 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:36.775 00:28:36.775 real 0m3.670s 00:28:36.775 user 0m4.351s 00:28:36.775 sys 0m0.429s 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:36.775 15:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.775 ************************************ 00:28:36.775 END TEST raid_write_error_test 00:28:36.775 ************************************ 00:28:36.775 15:58:09 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:28:36.775 15:58:09 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:28:36.775 15:58:09 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:28:36.775 15:58:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:28:36.775 15:58:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:36.775 15:58:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:36.775 ************************************ 00:28:36.775 START TEST raid_rebuild_test 00:28:36.775 ************************************ 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:28:36.775 
15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:28:36.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73034 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73034 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 73034 ']' 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.775 15:58:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.032 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:37.032 Zero copy mechanism will not be used. 00:28:37.032 [2024-11-05 15:58:09.233371] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:28:37.032 [2024-11-05 15:58:09.233486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73034 ] 00:28:37.032 [2024-11-05 15:58:09.388926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.288 [2024-11-05 15:58:09.473561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.288 [2024-11-05 15:58:09.583962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:37.288 [2024-11-05 15:58:09.584003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.853 BaseBdev1_malloc 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.853 [2024-11-05 15:58:10.174850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:37.853 
[2024-11-05 15:58:10.174902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.853 [2024-11-05 15:58:10.174920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:37.853 [2024-11-05 15:58:10.174929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.853 [2024-11-05 15:58:10.176681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.853 [2024-11-05 15:58:10.176715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:37.853 BaseBdev1 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.853 BaseBdev2_malloc 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.853 [2024-11-05 15:58:10.206326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:37.853 [2024-11-05 15:58:10.206374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.853 [2024-11-05 15:58:10.206389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:28:37.853 [2024-11-05 15:58:10.206398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.853 [2024-11-05 15:58:10.208164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.853 [2024-11-05 15:58:10.208194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:37.853 BaseBdev2 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.853 spare_malloc 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.853 spare_delay 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.853 [2024-11-05 15:58:10.262361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:37.853 [2024-11-05 15:58:10.262412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:28:37.853 [2024-11-05 15:58:10.262425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:37.853 [2024-11-05 15:58:10.262434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.853 [2024-11-05 15:58:10.264172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.853 [2024-11-05 15:58:10.264204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:37.853 spare 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.853 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.147 [2024-11-05 15:58:10.270405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:38.147 [2024-11-05 15:58:10.271908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:38.147 [2024-11-05 15:58:10.271979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:38.147 [2024-11-05 15:58:10.271990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:38.147 [2024-11-05 15:58:10.272196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:38.147 [2024-11-05 15:58:10.272313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:38.147 [2024-11-05 15:58:10.272326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:38.147 [2024-11-05 15:58:10.272439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:38.147 "name": "raid_bdev1", 00:28:38.147 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:38.147 "strip_size_kb": 0, 00:28:38.147 "state": "online", 00:28:38.147 
"raid_level": "raid1", 00:28:38.147 "superblock": false, 00:28:38.147 "num_base_bdevs": 2, 00:28:38.147 "num_base_bdevs_discovered": 2, 00:28:38.147 "num_base_bdevs_operational": 2, 00:28:38.147 "base_bdevs_list": [ 00:28:38.147 { 00:28:38.147 "name": "BaseBdev1", 00:28:38.147 "uuid": "d04dbd11-9578-5541-8d40-288cdddce8ce", 00:28:38.147 "is_configured": true, 00:28:38.147 "data_offset": 0, 00:28:38.147 "data_size": 65536 00:28:38.147 }, 00:28:38.147 { 00:28:38.147 "name": "BaseBdev2", 00:28:38.147 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:38.147 "is_configured": true, 00:28:38.147 "data_offset": 0, 00:28:38.147 "data_size": 65536 00:28:38.147 } 00:28:38.147 ] 00:28:38.147 }' 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:38.147 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.406 [2024-11-05 15:58:10.578728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:38.406 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:38.664 [2024-11-05 15:58:10.826558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:38.664 /dev/nbd0 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:38.664 1+0 records in 00:28:38.664 1+0 records out 00:28:38.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270767 s, 15.1 MB/s 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:28:38.664 15:58:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:28:42.846 65536+0 records in 00:28:42.846 65536+0 records out 00:28:42.846 33554432 bytes (34 MB, 32 MiB) copied, 3.78595 s, 8.9 MB/s 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:42.846 [2024-11-05 15:58:14.890119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.846 [2024-11-05 15:58:14.898189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:42.846 "name": "raid_bdev1", 00:28:42.846 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:42.846 "strip_size_kb": 0, 00:28:42.846 "state": "online", 00:28:42.846 "raid_level": "raid1", 00:28:42.846 "superblock": false, 00:28:42.846 "num_base_bdevs": 2, 00:28:42.846 "num_base_bdevs_discovered": 1, 00:28:42.846 "num_base_bdevs_operational": 1, 00:28:42.846 "base_bdevs_list": [ 00:28:42.846 { 00:28:42.846 "name": null, 00:28:42.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.846 "is_configured": false, 00:28:42.846 "data_offset": 0, 00:28:42.846 "data_size": 65536 00:28:42.846 }, 00:28:42.846 { 00:28:42.846 "name": "BaseBdev2", 00:28:42.846 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:42.846 "is_configured": true, 00:28:42.846 "data_offset": 0, 00:28:42.846 "data_size": 65536 00:28:42.846 } 00:28:42.846 ] 00:28:42.846 }' 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:42.846 15:58:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.846 15:58:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:42.846 15:58:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.846 15:58:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.846 [2024-11-05 15:58:15.202262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:42.846 [2024-11-05 15:58:15.211665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:28:42.846 15:58:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.846 15:58:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:42.846 [2024-11-05 15:58:15.213253] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:44.218 "name": "raid_bdev1", 00:28:44.218 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:44.218 "strip_size_kb": 0, 00:28:44.218 "state": "online", 00:28:44.218 "raid_level": "raid1", 00:28:44.218 "superblock": false, 00:28:44.218 "num_base_bdevs": 2, 00:28:44.218 "num_base_bdevs_discovered": 2, 00:28:44.218 "num_base_bdevs_operational": 2, 00:28:44.218 "process": { 00:28:44.218 "type": "rebuild", 00:28:44.218 "target": "spare", 00:28:44.218 "progress": { 00:28:44.218 
"blocks": 20480, 00:28:44.218 "percent": 31 00:28:44.218 } 00:28:44.218 }, 00:28:44.218 "base_bdevs_list": [ 00:28:44.218 { 00:28:44.218 "name": "spare", 00:28:44.218 "uuid": "3e3fbda2-8efe-5522-9b5c-b6acbd965e18", 00:28:44.218 "is_configured": true, 00:28:44.218 "data_offset": 0, 00:28:44.218 "data_size": 65536 00:28:44.218 }, 00:28:44.218 { 00:28:44.218 "name": "BaseBdev2", 00:28:44.218 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:44.218 "is_configured": true, 00:28:44.218 "data_offset": 0, 00:28:44.218 "data_size": 65536 00:28:44.218 } 00:28:44.218 ] 00:28:44.218 }' 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.218 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.218 [2024-11-05 15:58:16.343549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:44.218 [2024-11-05 15:58:16.418917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:44.219 [2024-11-05 15:58:16.418982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:44.219 [2024-11-05 15:58:16.418994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:44.219 [2024-11-05 15:58:16.419002] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:44.219 15:58:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:44.219 "name": "raid_bdev1", 00:28:44.219 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:44.219 "strip_size_kb": 0, 00:28:44.219 "state": "online", 00:28:44.219 "raid_level": "raid1", 00:28:44.219 
"superblock": false, 00:28:44.219 "num_base_bdevs": 2, 00:28:44.219 "num_base_bdevs_discovered": 1, 00:28:44.219 "num_base_bdevs_operational": 1, 00:28:44.219 "base_bdevs_list": [ 00:28:44.219 { 00:28:44.219 "name": null, 00:28:44.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:44.219 "is_configured": false, 00:28:44.219 "data_offset": 0, 00:28:44.219 "data_size": 65536 00:28:44.219 }, 00:28:44.219 { 00:28:44.219 "name": "BaseBdev2", 00:28:44.219 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:44.219 "is_configured": true, 00:28:44.219 "data_offset": 0, 00:28:44.219 "data_size": 65536 00:28:44.219 } 00:28:44.219 ] 00:28:44.219 }' 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:44.219 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:28:44.476 "name": "raid_bdev1", 00:28:44.476 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:44.476 "strip_size_kb": 0, 00:28:44.476 "state": "online", 00:28:44.476 "raid_level": "raid1", 00:28:44.476 "superblock": false, 00:28:44.476 "num_base_bdevs": 2, 00:28:44.476 "num_base_bdevs_discovered": 1, 00:28:44.476 "num_base_bdevs_operational": 1, 00:28:44.476 "base_bdevs_list": [ 00:28:44.476 { 00:28:44.476 "name": null, 00:28:44.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:44.476 "is_configured": false, 00:28:44.476 "data_offset": 0, 00:28:44.476 "data_size": 65536 00:28:44.476 }, 00:28:44.476 { 00:28:44.476 "name": "BaseBdev2", 00:28:44.476 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:44.476 "is_configured": true, 00:28:44.476 "data_offset": 0, 00:28:44.476 "data_size": 65536 00:28:44.476 } 00:28:44.476 ] 00:28:44.476 }' 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.476 [2024-11-05 15:58:16.857422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:44.476 [2024-11-05 15:58:16.866657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:28:44.476 15:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.476 
15:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:44.476 [2024-11-05 15:58:16.868290] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.847 15:58:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:45.848 "name": "raid_bdev1", 00:28:45.848 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:45.848 "strip_size_kb": 0, 00:28:45.848 "state": "online", 00:28:45.848 "raid_level": "raid1", 00:28:45.848 "superblock": false, 00:28:45.848 "num_base_bdevs": 2, 00:28:45.848 "num_base_bdevs_discovered": 2, 00:28:45.848 "num_base_bdevs_operational": 2, 00:28:45.848 "process": { 00:28:45.848 "type": "rebuild", 00:28:45.848 "target": "spare", 00:28:45.848 "progress": { 00:28:45.848 "blocks": 20480, 00:28:45.848 "percent": 31 00:28:45.848 } 00:28:45.848 }, 00:28:45.848 "base_bdevs_list": [ 
00:28:45.848 { 00:28:45.848 "name": "spare", 00:28:45.848 "uuid": "3e3fbda2-8efe-5522-9b5c-b6acbd965e18", 00:28:45.848 "is_configured": true, 00:28:45.848 "data_offset": 0, 00:28:45.848 "data_size": 65536 00:28:45.848 }, 00:28:45.848 { 00:28:45.848 "name": "BaseBdev2", 00:28:45.848 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:45.848 "is_configured": true, 00:28:45.848 "data_offset": 0, 00:28:45.848 "data_size": 65536 00:28:45.848 } 00:28:45.848 ] 00:28:45.848 }' 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=273 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:45.848 
15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.848 15:58:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.848 15:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:45.848 "name": "raid_bdev1", 00:28:45.848 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:45.848 "strip_size_kb": 0, 00:28:45.848 "state": "online", 00:28:45.848 "raid_level": "raid1", 00:28:45.848 "superblock": false, 00:28:45.848 "num_base_bdevs": 2, 00:28:45.848 "num_base_bdevs_discovered": 2, 00:28:45.848 "num_base_bdevs_operational": 2, 00:28:45.848 "process": { 00:28:45.848 "type": "rebuild", 00:28:45.848 "target": "spare", 00:28:45.848 "progress": { 00:28:45.848 "blocks": 22528, 00:28:45.848 "percent": 34 00:28:45.848 } 00:28:45.848 }, 00:28:45.848 "base_bdevs_list": [ 00:28:45.848 { 00:28:45.848 "name": "spare", 00:28:45.848 "uuid": "3e3fbda2-8efe-5522-9b5c-b6acbd965e18", 00:28:45.848 "is_configured": true, 00:28:45.848 "data_offset": 0, 00:28:45.848 "data_size": 65536 00:28:45.848 }, 00:28:45.848 { 00:28:45.848 "name": "BaseBdev2", 00:28:45.848 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:45.848 "is_configured": true, 00:28:45.848 "data_offset": 0, 00:28:45.848 "data_size": 65536 00:28:45.848 } 00:28:45.848 ] 00:28:45.848 }' 00:28:45.848 15:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:45.848 15:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:28:45.848 15:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:45.848 15:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:45.848 15:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:46.780 "name": "raid_bdev1", 00:28:46.780 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:46.780 "strip_size_kb": 0, 00:28:46.780 "state": "online", 00:28:46.780 "raid_level": "raid1", 00:28:46.780 "superblock": false, 00:28:46.780 "num_base_bdevs": 2, 00:28:46.780 "num_base_bdevs_discovered": 2, 00:28:46.780 "num_base_bdevs_operational": 2, 00:28:46.780 "process": { 
00:28:46.780 "type": "rebuild", 00:28:46.780 "target": "spare", 00:28:46.780 "progress": { 00:28:46.780 "blocks": 45056, 00:28:46.780 "percent": 68 00:28:46.780 } 00:28:46.780 }, 00:28:46.780 "base_bdevs_list": [ 00:28:46.780 { 00:28:46.780 "name": "spare", 00:28:46.780 "uuid": "3e3fbda2-8efe-5522-9b5c-b6acbd965e18", 00:28:46.780 "is_configured": true, 00:28:46.780 "data_offset": 0, 00:28:46.780 "data_size": 65536 00:28:46.780 }, 00:28:46.780 { 00:28:46.780 "name": "BaseBdev2", 00:28:46.780 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:46.780 "is_configured": true, 00:28:46.780 "data_offset": 0, 00:28:46.780 "data_size": 65536 00:28:46.780 } 00:28:46.780 ] 00:28:46.780 }' 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:46.780 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:46.781 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:46.781 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.781 15:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:47.728 [2024-11-05 15:58:20.083160] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:47.728 [2024-11-05 15:58:20.083442] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:47.728 [2024-11-05 15:58:20.083505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:47.985 "name": "raid_bdev1", 00:28:47.985 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:47.985 "strip_size_kb": 0, 00:28:47.985 "state": "online", 00:28:47.985 "raid_level": "raid1", 00:28:47.985 "superblock": false, 00:28:47.985 "num_base_bdevs": 2, 00:28:47.985 "num_base_bdevs_discovered": 2, 00:28:47.985 "num_base_bdevs_operational": 2, 00:28:47.985 "base_bdevs_list": [ 00:28:47.985 { 00:28:47.985 "name": "spare", 00:28:47.985 "uuid": "3e3fbda2-8efe-5522-9b5c-b6acbd965e18", 00:28:47.985 "is_configured": true, 00:28:47.985 "data_offset": 0, 00:28:47.985 "data_size": 65536 00:28:47.985 }, 00:28:47.985 { 00:28:47.985 "name": "BaseBdev2", 00:28:47.985 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:47.985 "is_configured": true, 00:28:47.985 "data_offset": 0, 00:28:47.985 "data_size": 65536 00:28:47.985 } 00:28:47.985 ] 00:28:47.985 }' 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:47.985 15:58:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:28:47.985 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:47.986 "name": "raid_bdev1", 00:28:47.986 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:47.986 "strip_size_kb": 0, 00:28:47.986 "state": "online", 00:28:47.986 "raid_level": "raid1", 00:28:47.986 "superblock": false, 00:28:47.986 "num_base_bdevs": 2, 00:28:47.986 "num_base_bdevs_discovered": 2, 00:28:47.986 "num_base_bdevs_operational": 2, 00:28:47.986 "base_bdevs_list": [ 00:28:47.986 { 00:28:47.986 "name": "spare", 00:28:47.986 "uuid": "3e3fbda2-8efe-5522-9b5c-b6acbd965e18", 00:28:47.986 "is_configured": true, 
00:28:47.986 "data_offset": 0, 00:28:47.986 "data_size": 65536 00:28:47.986 }, 00:28:47.986 { 00:28:47.986 "name": "BaseBdev2", 00:28:47.986 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:47.986 "is_configured": true, 00:28:47.986 "data_offset": 0, 00:28:47.986 "data_size": 65536 00:28:47.986 } 00:28:47.986 ] 00:28:47.986 }' 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.986 15:58:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.986 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.243 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.243 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:48.243 "name": "raid_bdev1", 00:28:48.243 "uuid": "05665e7e-0aef-43c7-a1b0-8bdf5f24fdc2", 00:28:48.243 "strip_size_kb": 0, 00:28:48.243 "state": "online", 00:28:48.243 "raid_level": "raid1", 00:28:48.243 "superblock": false, 00:28:48.243 "num_base_bdevs": 2, 00:28:48.243 "num_base_bdevs_discovered": 2, 00:28:48.243 "num_base_bdevs_operational": 2, 00:28:48.243 "base_bdevs_list": [ 00:28:48.243 { 00:28:48.243 "name": "spare", 00:28:48.243 "uuid": "3e3fbda2-8efe-5522-9b5c-b6acbd965e18", 00:28:48.243 "is_configured": true, 00:28:48.243 "data_offset": 0, 00:28:48.243 "data_size": 65536 00:28:48.243 }, 00:28:48.243 { 00:28:48.243 "name": "BaseBdev2", 00:28:48.243 "uuid": "0d2978cc-bec6-547e-8ac3-91cf93e74a15", 00:28:48.243 "is_configured": true, 00:28:48.243 "data_offset": 0, 00:28:48.243 "data_size": 65536 00:28:48.243 } 00:28:48.243 ] 00:28:48.243 }' 00:28:48.243 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:48.243 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.500 [2024-11-05 15:58:20.707875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:48.500 [2024-11-05 
15:58:20.707902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:48.500 [2024-11-05 15:58:20.707966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:48.500 [2024-11-05 15:58:20.708021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:48.500 [2024-11-05 15:58:20.708029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:48.500 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:48.757 /dev/nbd0 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:48.757 1+0 records in 00:28:48.757 1+0 records out 00:28:48.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493372 s, 8.3 MB/s 00:28:48.757 15:58:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.757 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:28:48.757 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.757 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:48.757 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:28:48.757 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:48.757 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:48.757 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:49.014 /dev/nbd1 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:49.014 15:58:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:49.014 1+0 records in 00:28:49.014 1+0 records out 00:28:49.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311226 s, 13.2 MB/s 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:49.015 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:49.272 15:58:21 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:49.272 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
73034 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 73034 ']' 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 73034 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73034 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:49.529 killing process with pid 73034 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73034' 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 73034 00:28:49.529 Received shutdown signal, test time was about 60.000000 seconds 00:28:49.529 00:28:49.529 Latency(us) 00:28:49.529 [2024-11-05T15:58:21.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.529 [2024-11-05T15:58:21.944Z] =================================================================================================================== 00:28:49.529 [2024-11-05T15:58:21.944Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:49.529 15:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 73034 00:28:49.529 [2024-11-05 15:58:21.802800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:49.786 [2024-11-05 15:58:21.949304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:28:50.352 00:28:50.352 real 0m13.333s 00:28:50.352 user 0m15.135s 00:28:50.352 sys 
0m2.360s 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.352 ************************************ 00:28:50.352 END TEST raid_rebuild_test 00:28:50.352 ************************************ 00:28:50.352 15:58:22 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:28:50.352 15:58:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:28:50.352 15:58:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:50.352 15:58:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:50.352 ************************************ 00:28:50.352 START TEST raid_rebuild_test_sb 00:28:50.352 ************************************ 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73435 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73435 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73435 ']' 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:50.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.352 15:58:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:50.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:50.352 Zero copy mechanism will not be used. 00:28:50.352 [2024-11-05 15:58:22.607133] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:28:50.352 [2024-11-05 15:58:22.607255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73435 ] 00:28:50.352 [2024-11-05 15:58:22.767222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.610 [2024-11-05 15:58:22.867142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.610 [2024-11-05 15:58:23.001739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:50.610 [2024-11-05 15:58:23.001786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:28:51.176 15:58:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 BaseBdev1_malloc 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 [2024-11-05 15:58:23.445503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:51.176 [2024-11-05 15:58:23.445566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.176 [2024-11-05 15:58:23.445588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:51.176 [2024-11-05 15:58:23.445599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:51.176 [2024-11-05 15:58:23.447785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:51.176 [2024-11-05 15:58:23.447825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:51.176 BaseBdev1 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 BaseBdev2_malloc 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 [2024-11-05 15:58:23.481385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:51.176 [2024-11-05 15:58:23.481438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.176 [2024-11-05 15:58:23.481455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:51.176 [2024-11-05 15:58:23.481469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:51.176 [2024-11-05 15:58:23.483600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:51.176 [2024-11-05 15:58:23.483637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:51.176 BaseBdev2 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 spare_malloc 00:28:51.176 15:58:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 spare_delay 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 [2024-11-05 15:58:23.544624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:51.176 [2024-11-05 15:58:23.544683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.176 [2024-11-05 15:58:23.544701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:51.176 [2024-11-05 15:58:23.544712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:51.176 [2024-11-05 15:58:23.546819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:51.176 [2024-11-05 15:58:23.546867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:51.176 spare 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 [2024-11-05 15:58:23.552686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:51.176 [2024-11-05 15:58:23.554500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:51.176 [2024-11-05 15:58:23.554682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:51.176 [2024-11-05 15:58:23.554703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:51.176 [2024-11-05 15:58:23.554970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:51.176 [2024-11-05 15:58:23.555125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:51.176 [2024-11-05 15:58:23.555140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:51.176 [2024-11-05 15:58:23.555281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:51.176 15:58:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.176 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.456 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:51.456 "name": "raid_bdev1", 00:28:51.456 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:28:51.456 "strip_size_kb": 0, 00:28:51.456 "state": "online", 00:28:51.456 "raid_level": "raid1", 00:28:51.456 "superblock": true, 00:28:51.456 "num_base_bdevs": 2, 00:28:51.456 "num_base_bdevs_discovered": 2, 00:28:51.456 "num_base_bdevs_operational": 2, 00:28:51.456 "base_bdevs_list": [ 00:28:51.456 { 00:28:51.456 "name": "BaseBdev1", 00:28:51.456 "uuid": "c3fd22a4-3155-5e7c-b54e-ef453d7edb82", 00:28:51.456 "is_configured": true, 00:28:51.456 "data_offset": 2048, 00:28:51.456 "data_size": 63488 00:28:51.456 }, 00:28:51.456 { 00:28:51.456 "name": "BaseBdev2", 00:28:51.456 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:28:51.456 "is_configured": true, 00:28:51.456 "data_offset": 2048, 00:28:51.456 "data_size": 63488 00:28:51.456 } 00:28:51.456 ] 00:28:51.456 }' 00:28:51.456 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:28:51.456 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.714 [2024-11-05 15:58:23.893058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:51.714 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:51.715 15:58:23 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:51.715 15:58:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:51.972 [2024-11-05 15:58:24.136838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:51.972 /dev/nbd0 00:28:51.972 15:58:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:51.972 15:58:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:51.972 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:51.972 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:28:51.973 15:58:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:51.973 1+0 records in 00:28:51.973 1+0 records out 00:28:51.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228651 s, 17.9 MB/s 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:28:51.973 15:58:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:56.184 63488+0 records in 00:28:56.184 63488+0 records out 00:28:56.184 32505856 bytes (33 MB, 31 MiB) copied, 4.33067 s, 7.5 MB/s 00:28:56.184 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:56.184 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:28:56.184 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:56.184 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:56.184 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:56.184 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:56.184 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:56.440 [2024-11-05 15:58:28.722195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.440 [2024-11-05 15:58:28.746735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:56.440 15:58:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.440 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:56.440 "name": "raid_bdev1", 00:28:56.440 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:28:56.440 "strip_size_kb": 0, 00:28:56.440 "state": "online", 
00:28:56.440 "raid_level": "raid1", 00:28:56.440 "superblock": true, 00:28:56.440 "num_base_bdevs": 2, 00:28:56.440 "num_base_bdevs_discovered": 1, 00:28:56.440 "num_base_bdevs_operational": 1, 00:28:56.440 "base_bdevs_list": [ 00:28:56.440 { 00:28:56.440 "name": null, 00:28:56.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.440 "is_configured": false, 00:28:56.440 "data_offset": 0, 00:28:56.440 "data_size": 63488 00:28:56.440 }, 00:28:56.440 { 00:28:56.440 "name": "BaseBdev2", 00:28:56.440 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:28:56.440 "is_configured": true, 00:28:56.440 "data_offset": 2048, 00:28:56.441 "data_size": 63488 00:28:56.441 } 00:28:56.441 ] 00:28:56.441 }' 00:28:56.441 15:58:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:56.441 15:58:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.715 15:58:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:56.715 15:58:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.715 15:58:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.715 [2024-11-05 15:58:29.062797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:56.715 [2024-11-05 15:58:29.072453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:28:56.715 15:58:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.715 15:58:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:56.715 [2024-11-05 15:58:29.074053] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:58.088 15:58:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:58.088 "name": "raid_bdev1", 00:28:58.088 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:28:58.088 "strip_size_kb": 0, 00:28:58.088 "state": "online", 00:28:58.088 "raid_level": "raid1", 00:28:58.088 "superblock": true, 00:28:58.088 "num_base_bdevs": 2, 00:28:58.088 "num_base_bdevs_discovered": 2, 00:28:58.088 "num_base_bdevs_operational": 2, 00:28:58.088 "process": { 00:28:58.088 "type": "rebuild", 00:28:58.088 "target": "spare", 00:28:58.088 "progress": { 00:28:58.088 "blocks": 20480, 00:28:58.088 "percent": 32 00:28:58.088 } 00:28:58.088 }, 00:28:58.088 "base_bdevs_list": [ 00:28:58.088 { 00:28:58.088 "name": "spare", 00:28:58.088 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:28:58.088 "is_configured": true, 00:28:58.088 "data_offset": 2048, 00:28:58.088 "data_size": 63488 00:28:58.088 }, 00:28:58.088 { 00:28:58.088 "name": "BaseBdev2", 00:28:58.088 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:28:58.088 
"is_configured": true, 00:28:58.088 "data_offset": 2048, 00:28:58.088 "data_size": 63488 00:28:58.088 } 00:28:58.088 ] 00:28:58.088 }' 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.088 [2024-11-05 15:58:30.168367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:58.088 [2024-11-05 15:58:30.179199] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:58.088 [2024-11-05 15:58:30.179256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:58.088 [2024-11-05 15:58:30.179268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:58.088 [2024-11-05 15:58:30.179279] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:58.088 "name": "raid_bdev1", 00:28:58.088 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:28:58.088 "strip_size_kb": 0, 00:28:58.088 "state": "online", 00:28:58.088 "raid_level": "raid1", 00:28:58.088 "superblock": true, 00:28:58.088 "num_base_bdevs": 2, 00:28:58.088 "num_base_bdevs_discovered": 1, 00:28:58.088 "num_base_bdevs_operational": 1, 00:28:58.088 "base_bdevs_list": [ 00:28:58.088 { 00:28:58.088 "name": null, 00:28:58.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.088 "is_configured": false, 00:28:58.088 "data_offset": 0, 00:28:58.088 "data_size": 
63488 00:28:58.088 }, 00:28:58.088 { 00:28:58.088 "name": "BaseBdev2", 00:28:58.088 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:28:58.088 "is_configured": true, 00:28:58.088 "data_offset": 2048, 00:28:58.088 "data_size": 63488 00:28:58.088 } 00:28:58.088 ] 00:28:58.088 }' 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.088 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:58.349 "name": "raid_bdev1", 00:28:58.349 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:28:58.349 "strip_size_kb": 0, 00:28:58.349 "state": "online", 00:28:58.349 "raid_level": "raid1", 00:28:58.349 "superblock": true, 00:28:58.349 "num_base_bdevs": 2, 00:28:58.349 "num_base_bdevs_discovered": 1, 
00:28:58.349 "num_base_bdevs_operational": 1, 00:28:58.349 "base_bdevs_list": [ 00:28:58.349 { 00:28:58.349 "name": null, 00:28:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.349 "is_configured": false, 00:28:58.349 "data_offset": 0, 00:28:58.349 "data_size": 63488 00:28:58.349 }, 00:28:58.349 { 00:28:58.349 "name": "BaseBdev2", 00:28:58.349 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:28:58.349 "is_configured": true, 00:28:58.349 "data_offset": 2048, 00:28:58.349 "data_size": 63488 00:28:58.349 } 00:28:58.349 ] 00:28:58.349 }' 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.349 [2024-11-05 15:58:30.582191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.349 [2024-11-05 15:58:30.591150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.349 15:58:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:58.349 [2024-11-05 15:58:30.592704] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:59.287 "name": "raid_bdev1", 00:28:59.287 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:28:59.287 "strip_size_kb": 0, 00:28:59.287 "state": "online", 00:28:59.287 "raid_level": "raid1", 00:28:59.287 "superblock": true, 00:28:59.287 "num_base_bdevs": 2, 00:28:59.287 "num_base_bdevs_discovered": 2, 00:28:59.287 "num_base_bdevs_operational": 2, 00:28:59.287 "process": { 00:28:59.287 "type": "rebuild", 00:28:59.287 "target": "spare", 00:28:59.287 "progress": { 00:28:59.287 "blocks": 20480, 00:28:59.287 "percent": 32 00:28:59.287 } 00:28:59.287 }, 00:28:59.287 "base_bdevs_list": [ 00:28:59.287 { 00:28:59.287 "name": "spare", 00:28:59.287 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:28:59.287 "is_configured": true, 00:28:59.287 "data_offset": 2048, 00:28:59.287 "data_size": 63488 00:28:59.287 }, 00:28:59.287 { 00:28:59.287 "name": "BaseBdev2", 
00:28:59.287 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:28:59.287 "is_configured": true, 00:28:59.287 "data_offset": 2048, 00:28:59.287 "data_size": 63488 00:28:59.287 } 00:28:59.287 ] 00:28:59.287 }' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:59.287 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=287 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:59.287 15:58:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.287 15:58:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.545 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:59.545 "name": "raid_bdev1", 00:28:59.545 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:28:59.545 "strip_size_kb": 0, 00:28:59.545 "state": "online", 00:28:59.545 "raid_level": "raid1", 00:28:59.545 "superblock": true, 00:28:59.545 "num_base_bdevs": 2, 00:28:59.545 "num_base_bdevs_discovered": 2, 00:28:59.545 "num_base_bdevs_operational": 2, 00:28:59.545 "process": { 00:28:59.545 "type": "rebuild", 00:28:59.545 "target": "spare", 00:28:59.545 "progress": { 00:28:59.545 "blocks": 20480, 00:28:59.545 "percent": 32 00:28:59.545 } 00:28:59.545 }, 00:28:59.545 "base_bdevs_list": [ 00:28:59.545 { 00:28:59.545 "name": "spare", 00:28:59.545 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:28:59.545 "is_configured": true, 00:28:59.545 "data_offset": 2048, 00:28:59.545 "data_size": 63488 00:28:59.545 }, 00:28:59.545 { 00:28:59.545 "name": "BaseBdev2", 00:28:59.545 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:28:59.545 "is_configured": true, 00:28:59.545 "data_offset": 2048, 00:28:59.545 "data_size": 63488 00:28:59.545 } 00:28:59.545 ] 00:28:59.545 }' 00:28:59.545 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:59.545 15:58:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.545 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:59.545 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.545 15:58:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.484 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:00.484 "name": "raid_bdev1", 00:29:00.484 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:00.484 "strip_size_kb": 0, 00:29:00.484 "state": "online", 00:29:00.484 "raid_level": "raid1", 00:29:00.484 "superblock": true, 00:29:00.484 "num_base_bdevs": 2, 00:29:00.484 
"num_base_bdevs_discovered": 2, 00:29:00.484 "num_base_bdevs_operational": 2, 00:29:00.484 "process": { 00:29:00.484 "type": "rebuild", 00:29:00.484 "target": "spare", 00:29:00.484 "progress": { 00:29:00.484 "blocks": 43008, 00:29:00.484 "percent": 67 00:29:00.484 } 00:29:00.484 }, 00:29:00.484 "base_bdevs_list": [ 00:29:00.484 { 00:29:00.484 "name": "spare", 00:29:00.485 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:00.485 "is_configured": true, 00:29:00.485 "data_offset": 2048, 00:29:00.485 "data_size": 63488 00:29:00.485 }, 00:29:00.485 { 00:29:00.485 "name": "BaseBdev2", 00:29:00.485 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:00.485 "is_configured": true, 00:29:00.485 "data_offset": 2048, 00:29:00.485 "data_size": 63488 00:29:00.485 } 00:29:00.485 ] 00:29:00.485 }' 00:29:00.485 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:00.485 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:00.485 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:00.485 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:00.485 15:58:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:01.425 [2024-11-05 15:58:33.706676] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:01.425 [2024-11-05 15:58:33.706747] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:01.425 [2024-11-05 15:58:33.706853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:01.685 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:01.685 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:01.685 15:58:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:01.685 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:01.685 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:01.686 "name": "raid_bdev1", 00:29:01.686 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:01.686 "strip_size_kb": 0, 00:29:01.686 "state": "online", 00:29:01.686 "raid_level": "raid1", 00:29:01.686 "superblock": true, 00:29:01.686 "num_base_bdevs": 2, 00:29:01.686 "num_base_bdevs_discovered": 2, 00:29:01.686 "num_base_bdevs_operational": 2, 00:29:01.686 "base_bdevs_list": [ 00:29:01.686 { 00:29:01.686 "name": "spare", 00:29:01.686 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:01.686 "is_configured": true, 00:29:01.686 "data_offset": 2048, 00:29:01.686 "data_size": 63488 00:29:01.686 }, 00:29:01.686 { 00:29:01.686 "name": "BaseBdev2", 00:29:01.686 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:01.686 "is_configured": true, 00:29:01.686 "data_offset": 2048, 00:29:01.686 "data_size": 63488 00:29:01.686 } 00:29:01.686 ] 00:29:01.686 }' 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.686 15:58:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:01.686 "name": "raid_bdev1", 00:29:01.686 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:01.686 "strip_size_kb": 0, 00:29:01.686 "state": "online", 00:29:01.686 "raid_level": "raid1", 00:29:01.686 "superblock": true, 00:29:01.686 "num_base_bdevs": 2, 00:29:01.686 "num_base_bdevs_discovered": 2, 
00:29:01.686 "num_base_bdevs_operational": 2, 00:29:01.686 "base_bdevs_list": [ 00:29:01.686 { 00:29:01.686 "name": "spare", 00:29:01.686 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:01.686 "is_configured": true, 00:29:01.686 "data_offset": 2048, 00:29:01.686 "data_size": 63488 00:29:01.686 }, 00:29:01.686 { 00:29:01.686 "name": "BaseBdev2", 00:29:01.686 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:01.686 "is_configured": true, 00:29:01.686 "data_offset": 2048, 00:29:01.686 "data_size": 63488 00:29:01.686 } 00:29:01.686 ] 00:29:01.686 }' 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.686 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.946 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:01.946 "name": "raid_bdev1", 00:29:01.946 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:01.946 "strip_size_kb": 0, 00:29:01.946 "state": "online", 00:29:01.946 "raid_level": "raid1", 00:29:01.946 "superblock": true, 00:29:01.946 "num_base_bdevs": 2, 00:29:01.946 "num_base_bdevs_discovered": 2, 00:29:01.946 "num_base_bdevs_operational": 2, 00:29:01.946 "base_bdevs_list": [ 00:29:01.946 { 00:29:01.946 "name": "spare", 00:29:01.946 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:01.946 "is_configured": true, 00:29:01.946 "data_offset": 2048, 00:29:01.946 "data_size": 63488 00:29:01.946 }, 00:29:01.946 { 00:29:01.946 "name": "BaseBdev2", 00:29:01.946 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:01.946 "is_configured": true, 00:29:01.946 "data_offset": 2048, 00:29:01.946 "data_size": 63488 00:29:01.946 } 00:29:01.946 ] 00:29:01.946 }' 00:29:01.946 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:01.946 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:02.205 15:58:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.205 [2024-11-05 15:58:34.381806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:02.205 [2024-11-05 15:58:34.381839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:02.205 [2024-11-05 15:58:34.381912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:02.205 [2024-11-05 15:58:34.381968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:02.205 [2024-11-05 15:58:34.381976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:02.205 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:02.540 /dev/nbd0 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:02.540 1+0 records in 00:29:02.540 1+0 records out 00:29:02.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167985 s, 24.4 MB/s 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:29:02.540 /dev/nbd1 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:02.540 
15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:02.540 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:02.540 1+0 records in 00:29:02.540 1+0 records out 00:29:02.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630487 s, 6.5 MB/s 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:02.541 15:58:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:02.803 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:29:02.803 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:02.803 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:02.803 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:02.803 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:02.803 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:02.803 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:03.063 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.323 [2024-11-05 15:58:35.496055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:03.323 [2024-11-05 15:58:35.496115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.323 [2024-11-05 15:58:35.496138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:03.323 [2024-11-05 15:58:35.496148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.323 [2024-11-05 15:58:35.498388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.323 [2024-11-05 15:58:35.498436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
00:29:03.323 [2024-11-05 15:58:35.498538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:03.323 [2024-11-05 15:58:35.498584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:03.323 [2024-11-05 15:58:35.498713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:03.323 spare 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.323 [2024-11-05 15:58:35.598804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:29:03.323 [2024-11-05 15:58:35.598867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:03.323 [2024-11-05 15:58:35.599199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:29:03.323 [2024-11-05 15:58:35.599385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:29:03.323 [2024-11-05 15:58:35.599395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:29:03.323 [2024-11-05 15:58:35.599566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.323 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.323 "name": "raid_bdev1", 00:29:03.323 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:03.323 "strip_size_kb": 0, 00:29:03.323 "state": "online", 00:29:03.323 "raid_level": "raid1", 00:29:03.323 "superblock": true, 00:29:03.323 "num_base_bdevs": 2, 00:29:03.323 "num_base_bdevs_discovered": 2, 00:29:03.323 "num_base_bdevs_operational": 2, 00:29:03.323 "base_bdevs_list": [ 00:29:03.323 { 00:29:03.323 "name": "spare", 00:29:03.324 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:03.324 "is_configured": true, 00:29:03.324 
"data_offset": 2048, 00:29:03.324 "data_size": 63488 00:29:03.324 }, 00:29:03.324 { 00:29:03.324 "name": "BaseBdev2", 00:29:03.324 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:03.324 "is_configured": true, 00:29:03.324 "data_offset": 2048, 00:29:03.324 "data_size": 63488 00:29:03.324 } 00:29:03.324 ] 00:29:03.324 }' 00:29:03.324 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.324 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:03.583 "name": "raid_bdev1", 00:29:03.583 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:03.583 "strip_size_kb": 0, 00:29:03.583 "state": "online", 00:29:03.583 "raid_level": "raid1", 00:29:03.583 "superblock": true, 00:29:03.583 "num_base_bdevs": 2, 
00:29:03.583 "num_base_bdevs_discovered": 2, 00:29:03.583 "num_base_bdevs_operational": 2, 00:29:03.583 "base_bdevs_list": [ 00:29:03.583 { 00:29:03.583 "name": "spare", 00:29:03.583 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:03.583 "is_configured": true, 00:29:03.583 "data_offset": 2048, 00:29:03.583 "data_size": 63488 00:29:03.583 }, 00:29:03.583 { 00:29:03.583 "name": "BaseBdev2", 00:29:03.583 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:03.583 "is_configured": true, 00:29:03.583 "data_offset": 2048, 00:29:03.583 "data_size": 63488 00:29:03.583 } 00:29:03.583 ] 00:29:03.583 }' 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:03.583 15:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:03.843 [2024-11-05 15:58:36.044225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:29:03.843 "name": "raid_bdev1", 00:29:03.843 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:03.843 "strip_size_kb": 0, 00:29:03.843 "state": "online", 00:29:03.843 "raid_level": "raid1", 00:29:03.843 "superblock": true, 00:29:03.843 "num_base_bdevs": 2, 00:29:03.843 "num_base_bdevs_discovered": 1, 00:29:03.843 "num_base_bdevs_operational": 1, 00:29:03.843 "base_bdevs_list": [ 00:29:03.843 { 00:29:03.843 "name": null, 00:29:03.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.843 "is_configured": false, 00:29:03.843 "data_offset": 0, 00:29:03.843 "data_size": 63488 00:29:03.843 }, 00:29:03.843 { 00:29:03.843 "name": "BaseBdev2", 00:29:03.843 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:03.843 "is_configured": true, 00:29:03.843 "data_offset": 2048, 00:29:03.843 "data_size": 63488 00:29:03.843 } 00:29:03.843 ] 00:29:03.843 }' 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.843 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.104 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:04.104 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.104 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.104 [2024-11-05 15:58:36.352321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:04.104 [2024-11-05 15:58:36.352495] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:04.104 [2024-11-05 15:58:36.352511] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:04.104 [2024-11-05 15:58:36.352547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:04.104 [2024-11-05 15:58:36.363133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:29:04.104 15:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.104 15:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:04.104 [2024-11-05 15:58:36.365042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:05.045 "name": "raid_bdev1", 00:29:05.045 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:05.045 "strip_size_kb": 0, 00:29:05.045 "state": "online", 00:29:05.045 "raid_level": "raid1", 
00:29:05.045 "superblock": true, 00:29:05.045 "num_base_bdevs": 2, 00:29:05.045 "num_base_bdevs_discovered": 2, 00:29:05.045 "num_base_bdevs_operational": 2, 00:29:05.045 "process": { 00:29:05.045 "type": "rebuild", 00:29:05.045 "target": "spare", 00:29:05.045 "progress": { 00:29:05.045 "blocks": 20480, 00:29:05.045 "percent": 32 00:29:05.045 } 00:29:05.045 }, 00:29:05.045 "base_bdevs_list": [ 00:29:05.045 { 00:29:05.045 "name": "spare", 00:29:05.045 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:05.045 "is_configured": true, 00:29:05.045 "data_offset": 2048, 00:29:05.045 "data_size": 63488 00:29:05.045 }, 00:29:05.045 { 00:29:05.045 "name": "BaseBdev2", 00:29:05.045 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:05.045 "is_configured": true, 00:29:05.045 "data_offset": 2048, 00:29:05.045 "data_size": 63488 00:29:05.045 } 00:29:05.045 ] 00:29:05.045 }' 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:05.045 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:05.046 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:05.046 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.046 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.307 [2024-11-05 15:58:37.463210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:05.307 [2024-11-05 15:58:37.470575] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:05.307 [2024-11-05 15:58:37.470637] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:29:05.307 [2024-11-05 15:58:37.470652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:05.307 [2024-11-05 15:58:37.470660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:05.307 "name": "raid_bdev1", 00:29:05.307 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:05.307 "strip_size_kb": 0, 00:29:05.307 "state": "online", 00:29:05.307 "raid_level": "raid1", 00:29:05.307 "superblock": true, 00:29:05.307 "num_base_bdevs": 2, 00:29:05.307 "num_base_bdevs_discovered": 1, 00:29:05.307 "num_base_bdevs_operational": 1, 00:29:05.307 "base_bdevs_list": [ 00:29:05.307 { 00:29:05.307 "name": null, 00:29:05.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.307 "is_configured": false, 00:29:05.307 "data_offset": 0, 00:29:05.307 "data_size": 63488 00:29:05.307 }, 00:29:05.307 { 00:29:05.307 "name": "BaseBdev2", 00:29:05.307 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:05.307 "is_configured": true, 00:29:05.307 "data_offset": 2048, 00:29:05.307 "data_size": 63488 00:29:05.307 } 00:29:05.307 ] 00:29:05.307 }' 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:05.307 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.567 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:05.567 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.567 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.567 [2024-11-05 15:58:37.813369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:05.567 [2024-11-05 15:58:37.813441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.567 [2024-11-05 15:58:37.813462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:05.567 [2024-11-05 15:58:37.813474] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.567 [2024-11-05 15:58:37.813930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.567 [2024-11-05 15:58:37.813956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:05.567 [2024-11-05 15:58:37.814045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:05.567 [2024-11-05 15:58:37.814061] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:05.567 [2024-11-05 15:58:37.814072] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:05.568 [2024-11-05 15:58:37.814098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:05.568 [2024-11-05 15:58:37.824633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:29:05.568 spare 00:29:05.568 15:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.568 15:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:05.568 [2024-11-05 15:58:37.826539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:06.508 "name": "raid_bdev1", 00:29:06.508 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:06.508 "strip_size_kb": 0, 00:29:06.508 "state": "online", 00:29:06.508 "raid_level": "raid1", 00:29:06.508 "superblock": true, 00:29:06.508 "num_base_bdevs": 2, 00:29:06.508 "num_base_bdevs_discovered": 2, 00:29:06.508 "num_base_bdevs_operational": 2, 00:29:06.508 "process": { 00:29:06.508 "type": "rebuild", 00:29:06.508 "target": "spare", 00:29:06.508 "progress": { 00:29:06.508 "blocks": 20480, 00:29:06.508 "percent": 32 00:29:06.508 } 00:29:06.508 }, 00:29:06.508 "base_bdevs_list": [ 00:29:06.508 { 00:29:06.508 "name": "spare", 00:29:06.508 "uuid": "83e7ad3a-8715-54f2-b168-94ea23f46d01", 00:29:06.508 "is_configured": true, 00:29:06.508 "data_offset": 2048, 00:29:06.508 "data_size": 63488 00:29:06.508 }, 00:29:06.508 { 00:29:06.508 "name": "BaseBdev2", 00:29:06.508 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:06.508 "is_configured": true, 00:29:06.508 "data_offset": 2048, 00:29:06.508 "data_size": 63488 00:29:06.508 } 00:29:06.508 ] 00:29:06.508 }' 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:06.508 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:06.768 
15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:06.768 15:58:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:06.769 15:58:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.769 15:58:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.769 [2024-11-05 15:58:38.932686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.769 [2024-11-05 15:58:39.032870] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:06.769 [2024-11-05 15:58:39.032941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.769 [2024-11-05 15:58:39.032959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.769 [2024-11-05 15:58:39.032967] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:06.769 "name": "raid_bdev1", 00:29:06.769 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:06.769 "strip_size_kb": 0, 00:29:06.769 "state": "online", 00:29:06.769 "raid_level": "raid1", 00:29:06.769 "superblock": true, 00:29:06.769 "num_base_bdevs": 2, 00:29:06.769 "num_base_bdevs_discovered": 1, 00:29:06.769 "num_base_bdevs_operational": 1, 00:29:06.769 "base_bdevs_list": [ 00:29:06.769 { 00:29:06.769 "name": null, 00:29:06.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.769 "is_configured": false, 00:29:06.769 "data_offset": 0, 00:29:06.769 "data_size": 63488 00:29:06.769 }, 00:29:06.769 { 00:29:06.769 "name": "BaseBdev2", 00:29:06.769 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:06.769 "is_configured": true, 00:29:06.769 "data_offset": 2048, 00:29:06.769 "data_size": 63488 00:29:06.769 } 00:29:06.769 ] 00:29:06.769 }' 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:06.769 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.029 15:58:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:07.029 "name": "raid_bdev1", 00:29:07.029 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:07.029 "strip_size_kb": 0, 00:29:07.029 "state": "online", 00:29:07.029 "raid_level": "raid1", 00:29:07.029 "superblock": true, 00:29:07.029 "num_base_bdevs": 2, 00:29:07.029 "num_base_bdevs_discovered": 1, 00:29:07.029 "num_base_bdevs_operational": 1, 00:29:07.029 "base_bdevs_list": [ 00:29:07.029 { 00:29:07.029 "name": null, 00:29:07.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.029 "is_configured": false, 00:29:07.029 "data_offset": 0, 00:29:07.029 "data_size": 63488 00:29:07.029 }, 00:29:07.029 { 00:29:07.029 "name": "BaseBdev2", 00:29:07.029 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:07.029 "is_configured": true, 00:29:07.029 "data_offset": 2048, 00:29:07.029 "data_size": 
63488 00:29:07.029 } 00:29:07.029 ] 00:29:07.029 }' 00:29:07.029 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.289 [2024-11-05 15:58:39.516013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:07.289 [2024-11-05 15:58:39.516067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:07.289 [2024-11-05 15:58:39.516090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:07.289 [2024-11-05 15:58:39.516100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.289 [2024-11-05 15:58:39.516527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.289 [2024-11-05 15:58:39.516557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:29:07.289 [2024-11-05 15:58:39.516633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:07.289 [2024-11-05 15:58:39.516653] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:07.289 [2024-11-05 15:58:39.516663] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:07.289 [2024-11-05 15:58:39.516674] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:07.289 BaseBdev1 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.289 15:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:08.228 "name": "raid_bdev1", 00:29:08.228 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:08.228 "strip_size_kb": 0, 00:29:08.228 "state": "online", 00:29:08.228 "raid_level": "raid1", 00:29:08.228 "superblock": true, 00:29:08.228 "num_base_bdevs": 2, 00:29:08.228 "num_base_bdevs_discovered": 1, 00:29:08.228 "num_base_bdevs_operational": 1, 00:29:08.228 "base_bdevs_list": [ 00:29:08.228 { 00:29:08.228 "name": null, 00:29:08.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.228 "is_configured": false, 00:29:08.228 "data_offset": 0, 00:29:08.228 "data_size": 63488 00:29:08.228 }, 00:29:08.228 { 00:29:08.228 "name": "BaseBdev2", 00:29:08.228 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:08.228 "is_configured": true, 00:29:08.228 "data_offset": 2048, 00:29:08.228 "data_size": 63488 00:29:08.228 } 00:29:08.228 ] 00:29:08.228 }' 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:08.228 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:08.489 "name": "raid_bdev1", 00:29:08.489 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:08.489 "strip_size_kb": 0, 00:29:08.489 "state": "online", 00:29:08.489 "raid_level": "raid1", 00:29:08.489 "superblock": true, 00:29:08.489 "num_base_bdevs": 2, 00:29:08.489 "num_base_bdevs_discovered": 1, 00:29:08.489 "num_base_bdevs_operational": 1, 00:29:08.489 "base_bdevs_list": [ 00:29:08.489 { 00:29:08.489 "name": null, 00:29:08.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.489 "is_configured": false, 00:29:08.489 "data_offset": 0, 00:29:08.489 "data_size": 63488 00:29:08.489 }, 00:29:08.489 { 00:29:08.489 "name": "BaseBdev2", 00:29:08.489 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:08.489 "is_configured": true, 00:29:08.489 "data_offset": 2048, 00:29:08.489 "data_size": 63488 00:29:08.489 } 00:29:08.489 ] 00:29:08.489 }' 00:29:08.489 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:08.748 15:58:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.748 [2024-11-05 15:58:40.956417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:08.748 [2024-11-05 15:58:40.956570] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:08.748 [2024-11-05 15:58:40.956595] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:08.748 request: 00:29:08.748 { 00:29:08.748 "base_bdev": "BaseBdev1", 00:29:08.748 "raid_bdev": "raid_bdev1", 00:29:08.748 "method": 
"bdev_raid_add_base_bdev", 00:29:08.748 "req_id": 1 00:29:08.748 } 00:29:08.748 Got JSON-RPC error response 00:29:08.748 response: 00:29:08.748 { 00:29:08.748 "code": -22, 00:29:08.748 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:08.748 } 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:08.748 15:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:09.725 15:58:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:09.725 "name": "raid_bdev1", 00:29:09.725 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:09.725 "strip_size_kb": 0, 00:29:09.725 "state": "online", 00:29:09.725 "raid_level": "raid1", 00:29:09.725 "superblock": true, 00:29:09.725 "num_base_bdevs": 2, 00:29:09.725 "num_base_bdevs_discovered": 1, 00:29:09.725 "num_base_bdevs_operational": 1, 00:29:09.725 "base_bdevs_list": [ 00:29:09.725 { 00:29:09.725 "name": null, 00:29:09.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.725 "is_configured": false, 00:29:09.725 "data_offset": 0, 00:29:09.725 "data_size": 63488 00:29:09.725 }, 00:29:09.725 { 00:29:09.725 "name": "BaseBdev2", 00:29:09.725 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:09.725 "is_configured": true, 00:29:09.725 "data_offset": 2048, 00:29:09.725 "data_size": 63488 00:29:09.725 } 00:29:09.725 ] 00:29:09.725 }' 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:09.725 15:58:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:09.986 "name": "raid_bdev1", 00:29:09.986 "uuid": "2c9a3774-a929-4fae-a849-8dbf2513652f", 00:29:09.986 "strip_size_kb": 0, 00:29:09.986 "state": "online", 00:29:09.986 "raid_level": "raid1", 00:29:09.986 "superblock": true, 00:29:09.986 "num_base_bdevs": 2, 00:29:09.986 "num_base_bdevs_discovered": 1, 00:29:09.986 "num_base_bdevs_operational": 1, 00:29:09.986 "base_bdevs_list": [ 00:29:09.986 { 00:29:09.986 "name": null, 00:29:09.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.986 "is_configured": false, 00:29:09.986 "data_offset": 0, 00:29:09.986 "data_size": 63488 00:29:09.986 }, 00:29:09.986 { 00:29:09.986 "name": "BaseBdev2", 00:29:09.986 "uuid": "2e3b2390-4958-5bfb-bc7e-a17342fc51f8", 00:29:09.986 "is_configured": true, 00:29:09.986 "data_offset": 2048, 00:29:09.986 "data_size": 63488 00:29:09.986 } 00:29:09.986 ] 00:29:09.986 }' 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73435 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73435 ']' 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 73435 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73435 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:09.986 killing process with pid 73435 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73435' 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 73435 00:29:09.986 Received shutdown signal, test time was about 60.000000 seconds 00:29:09.986 00:29:09.986 Latency(us) 00:29:09.986 [2024-11-05T15:58:42.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.986 [2024-11-05T15:58:42.401Z] =================================================================================================================== 00:29:09.986 [2024-11-05T15:58:42.401Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:09.986 15:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 73435 00:29:09.986 [2024-11-05 
15:58:42.388677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:09.986 [2024-11-05 15:58:42.388809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:09.986 [2024-11-05 15:58:42.388881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:09.986 [2024-11-05 15:58:42.388895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:10.246 [2024-11-05 15:58:42.584186] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:11.189 ************************************ 00:29:11.189 END TEST raid_rebuild_test_sb 00:29:11.189 ************************************ 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:29:11.189 00:29:11.189 real 0m20.784s 00:29:11.189 user 0m24.332s 00:29:11.189 sys 0m3.077s 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:11.189 15:58:43 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:29:11.189 15:58:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:29:11.189 15:58:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:11.189 15:58:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:11.189 ************************************ 00:29:11.189 START TEST raid_rebuild_test_io 00:29:11.189 ************************************ 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:11.189 
15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74141 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74141 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 74141 ']' 00:29:11.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:11.189 15:58:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:11.189 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:11.189 Zero copy mechanism will not be used. 00:29:11.189 [2024-11-05 15:58:43.481685] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:29:11.189 [2024-11-05 15:58:43.481865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74141 ] 00:29:11.450 [2024-11-05 15:58:43.645673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.450 [2024-11-05 15:58:43.782922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.710 [2024-11-05 15:58:43.947720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:11.710 [2024-11-05 15:58:43.947811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:11.971 BaseBdev1_malloc 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.971 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.238 [2024-11-05 15:58:44.387537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:29:12.238 [2024-11-05 15:58:44.387638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.238 [2024-11-05 15:58:44.387677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:12.238 [2024-11-05 15:58:44.387695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.238 [2024-11-05 15:58:44.390211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.238 [2024-11-05 15:58:44.390274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:12.238 BaseBdev1 00:29:12.238 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.238 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:12.238 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:12.238 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.238 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.238 BaseBdev2_malloc 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.239 [2024-11-05 15:58:44.428565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:12.239 [2024-11-05 15:58:44.428657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.239 [2024-11-05 15:58:44.428688] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:12.239 [2024-11-05 15:58:44.428703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.239 [2024-11-05 15:58:44.431164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.239 [2024-11-05 15:58:44.431223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:12.239 BaseBdev2 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.239 spare_malloc 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.239 spare_delay 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.239 [2024-11-05 15:58:44.492411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:29:12.239 [2024-11-05 15:58:44.492500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.239 [2024-11-05 15:58:44.492529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:12.239 [2024-11-05 15:58:44.492545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.239 [2024-11-05 15:58:44.495081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.239 [2024-11-05 15:58:44.495142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:12.239 spare 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.239 [2024-11-05 15:58:44.500479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:12.239 [2024-11-05 15:58:44.502690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:12.239 [2024-11-05 15:58:44.502871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:12.239 [2024-11-05 15:58:44.502899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:12.239 [2024-11-05 15:58:44.503229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:12.239 [2024-11-05 15:58:44.503466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:12.239 [2024-11-05 15:58:44.503492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:29:12.239 [2024-11-05 15:58:44.503729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:12.239 
"name": "raid_bdev1", 00:29:12.239 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:12.239 "strip_size_kb": 0, 00:29:12.239 "state": "online", 00:29:12.239 "raid_level": "raid1", 00:29:12.239 "superblock": false, 00:29:12.239 "num_base_bdevs": 2, 00:29:12.239 "num_base_bdevs_discovered": 2, 00:29:12.239 "num_base_bdevs_operational": 2, 00:29:12.239 "base_bdevs_list": [ 00:29:12.239 { 00:29:12.239 "name": "BaseBdev1", 00:29:12.239 "uuid": "97e2df37-e916-5a70-bfce-9c0ed7f9e85c", 00:29:12.239 "is_configured": true, 00:29:12.239 "data_offset": 0, 00:29:12.239 "data_size": 65536 00:29:12.239 }, 00:29:12.239 { 00:29:12.239 "name": "BaseBdev2", 00:29:12.239 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:12.239 "is_configured": true, 00:29:12.239 "data_offset": 0, 00:29:12.239 "data_size": 65536 00:29:12.239 } 00:29:12.239 ] 00:29:12.239 }' 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:12.239 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:12.518 [2024-11-05 15:58:44.832908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:12.518 [2024-11-05 15:58:44.900561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:12.518 15:58:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:12.518 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:12.519 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:12.519 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:12.519 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.519 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.519 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.519 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.519 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.780 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:12.780 "name": "raid_bdev1", 00:29:12.780 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:12.780 "strip_size_kb": 0, 00:29:12.780 "state": "online", 00:29:12.780 "raid_level": "raid1", 00:29:12.780 "superblock": false, 00:29:12.780 "num_base_bdevs": 2, 00:29:12.780 "num_base_bdevs_discovered": 1, 00:29:12.780 "num_base_bdevs_operational": 1, 00:29:12.780 "base_bdevs_list": [ 00:29:12.780 { 00:29:12.780 "name": null, 00:29:12.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.780 "is_configured": false, 00:29:12.780 "data_offset": 0, 00:29:12.780 "data_size": 65536 00:29:12.780 }, 00:29:12.780 { 00:29:12.780 "name": "BaseBdev2", 00:29:12.780 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:12.780 "is_configured": true, 00:29:12.780 "data_offset": 0, 00:29:12.780 "data_size": 65536 00:29:12.780 } 00:29:12.780 ] 00:29:12.780 }' 00:29:12.780 15:58:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:29:12.780 15:58:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.780 [2024-11-05 15:58:44.995294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:12.780 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:12.780 Zero copy mechanism will not be used. 00:29:12.780 Running I/O for 60 seconds... 00:29:13.041 15:58:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:13.041 15:58:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.041 15:58:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.041 [2024-11-05 15:58:45.266308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:13.041 15:58:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.041 15:58:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:13.041 [2024-11-05 15:58:45.315423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:13.041 [2024-11-05 15:58:45.317774] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:13.041 [2024-11-05 15:58:45.447511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:13.041 [2024-11-05 15:58:45.448354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:13.303 [2024-11-05 15:58:45.660833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:13.303 [2024-11-05 15:58:45.661373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:13.563 [2024-11-05 15:58:45.886428] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:13.564 [2024-11-05 15:58:45.887244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:13.824 158.00 IOPS, 474.00 MiB/s [2024-11-05T15:58:46.239Z] [2024-11-05 15:58:46.106757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:14.085 "name": "raid_bdev1", 00:29:14.085 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:14.085 "strip_size_kb": 0, 00:29:14.085 "state": "online", 00:29:14.085 "raid_level": "raid1", 00:29:14.085 "superblock": false, 00:29:14.085 "num_base_bdevs": 2, 00:29:14.085 
"num_base_bdevs_discovered": 2, 00:29:14.085 "num_base_bdevs_operational": 2, 00:29:14.085 "process": { 00:29:14.085 "type": "rebuild", 00:29:14.085 "target": "spare", 00:29:14.085 "progress": { 00:29:14.085 "blocks": 12288, 00:29:14.085 "percent": 18 00:29:14.085 } 00:29:14.085 }, 00:29:14.085 "base_bdevs_list": [ 00:29:14.085 { 00:29:14.085 "name": "spare", 00:29:14.085 "uuid": "cfd5b327-570a-5067-81eb-934106452cd9", 00:29:14.085 "is_configured": true, 00:29:14.085 "data_offset": 0, 00:29:14.085 "data_size": 65536 00:29:14.085 }, 00:29:14.085 { 00:29:14.085 "name": "BaseBdev2", 00:29:14.085 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:14.085 "is_configured": true, 00:29:14.085 "data_offset": 0, 00:29:14.085 "data_size": 65536 00:29:14.085 } 00:29:14.085 ] 00:29:14.085 }' 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.085 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:14.085 [2024-11-05 15:58:46.421372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:14.085 [2024-11-05 15:58:46.445670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:14.347 [2024-11-05 15:58:46.545317] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:29:14.347 [2024-11-05 15:58:46.555827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.347 [2024-11-05 15:58:46.555901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:14.347 [2024-11-05 15:58:46.555918] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:14.347 [2024-11-05 15:58:46.598876] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:14.347 "name": "raid_bdev1", 00:29:14.347 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:14.347 "strip_size_kb": 0, 00:29:14.347 "state": "online", 00:29:14.347 "raid_level": "raid1", 00:29:14.347 "superblock": false, 00:29:14.347 "num_base_bdevs": 2, 00:29:14.347 "num_base_bdevs_discovered": 1, 00:29:14.347 "num_base_bdevs_operational": 1, 00:29:14.347 "base_bdevs_list": [ 00:29:14.347 { 00:29:14.347 "name": null, 00:29:14.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.347 "is_configured": false, 00:29:14.347 "data_offset": 0, 00:29:14.347 "data_size": 65536 00:29:14.347 }, 00:29:14.347 { 00:29:14.347 "name": "BaseBdev2", 00:29:14.347 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:14.347 "is_configured": true, 00:29:14.347 "data_offset": 0, 00:29:14.347 "data_size": 65536 00:29:14.347 } 00:29:14.347 ] 00:29:14.347 }' 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:14.347 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:14.608 "name": "raid_bdev1", 00:29:14.608 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:14.608 "strip_size_kb": 0, 00:29:14.608 "state": "online", 00:29:14.608 "raid_level": "raid1", 00:29:14.608 "superblock": false, 00:29:14.608 "num_base_bdevs": 2, 00:29:14.608 "num_base_bdevs_discovered": 1, 00:29:14.608 "num_base_bdevs_operational": 1, 00:29:14.608 "base_bdevs_list": [ 00:29:14.608 { 00:29:14.608 "name": null, 00:29:14.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.608 "is_configured": false, 00:29:14.608 "data_offset": 0, 00:29:14.608 "data_size": 65536 00:29:14.608 }, 00:29:14.608 { 00:29:14.608 "name": "BaseBdev2", 00:29:14.608 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:14.608 "is_configured": true, 00:29:14.608 "data_offset": 0, 00:29:14.608 "data_size": 65536 00:29:14.608 } 00:29:14.608 ] 00:29:14.608 }' 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:14.608 15:58:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:14.608 148.50 IOPS, 445.50 MiB/s [2024-11-05T15:58:47.023Z] 15:58:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:29:14.608 15:58:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:14.608 15:58:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.608 15:58:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:14.869 [2024-11-05 15:58:47.026747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:14.869 15:58:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.869 15:58:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:14.869 [2024-11-05 15:58:47.074067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:14.869 [2024-11-05 15:58:47.076205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:14.869 [2024-11-05 15:58:47.198869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:14.869 [2024-11-05 15:58:47.199558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:15.167 [2024-11-05 15:58:47.424755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:15.167 [2024-11-05 15:58:47.425236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:15.427 [2024-11-05 15:58:47.678596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:15.427 [2024-11-05 15:58:47.679444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:15.427 [2024-11-05 15:58:47.812902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:15.689 139.67 IOPS, 419.00 MiB/s [2024-11-05T15:58:48.104Z] 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:15.689 15:58:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.949 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:15.949 "name": "raid_bdev1", 00:29:15.949 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:15.949 "strip_size_kb": 0, 00:29:15.949 "state": "online", 00:29:15.949 "raid_level": "raid1", 00:29:15.949 "superblock": false, 00:29:15.949 "num_base_bdevs": 2, 00:29:15.949 "num_base_bdevs_discovered": 2, 00:29:15.949 "num_base_bdevs_operational": 2, 00:29:15.949 "process": { 00:29:15.949 "type": "rebuild", 00:29:15.949 "target": "spare", 00:29:15.949 "progress": { 00:29:15.949 "blocks": 12288, 00:29:15.949 "percent": 18 00:29:15.949 } 00:29:15.949 }, 00:29:15.949 "base_bdevs_list": [ 00:29:15.949 { 00:29:15.949 "name": "spare", 00:29:15.949 "uuid": 
"cfd5b327-570a-5067-81eb-934106452cd9", 00:29:15.949 "is_configured": true, 00:29:15.949 "data_offset": 0, 00:29:15.949 "data_size": 65536 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "name": "BaseBdev2", 00:29:15.949 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:15.949 "is_configured": true, 00:29:15.949 "data_offset": 0, 00:29:15.949 "data_size": 65536 00:29:15.949 } 00:29:15.949 ] 00:29:15.949 }' 00:29:15.949 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:15.949 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:15.949 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:15.949 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=304 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:15.950 15:58:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.950 [2024-11-05 15:58:48.187571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:15.950 [2024-11-05 15:58:48.188075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:15.950 "name": "raid_bdev1", 00:29:15.950 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:15.950 "strip_size_kb": 0, 00:29:15.950 "state": "online", 00:29:15.950 "raid_level": "raid1", 00:29:15.950 "superblock": false, 00:29:15.950 "num_base_bdevs": 2, 00:29:15.950 "num_base_bdevs_discovered": 2, 00:29:15.950 "num_base_bdevs_operational": 2, 00:29:15.950 "process": { 00:29:15.950 "type": "rebuild", 00:29:15.950 "target": "spare", 00:29:15.950 "progress": { 00:29:15.950 "blocks": 12288, 00:29:15.950 "percent": 18 00:29:15.950 } 00:29:15.950 }, 00:29:15.950 "base_bdevs_list": [ 00:29:15.950 { 00:29:15.950 "name": "spare", 00:29:15.950 "uuid": "cfd5b327-570a-5067-81eb-934106452cd9", 00:29:15.950 "is_configured": true, 00:29:15.950 "data_offset": 0, 00:29:15.950 "data_size": 65536 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "name": "BaseBdev2", 00:29:15.950 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:15.950 
"is_configured": true, 00:29:15.950 "data_offset": 0, 00:29:15.950 "data_size": 65536 00:29:15.950 } 00:29:15.950 ] 00:29:15.950 }' 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:15.950 15:58:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:15.950 [2024-11-05 15:58:48.312320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:16.521 [2024-11-05 15:58:48.632227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:16.781 124.75 IOPS, 374.25 MiB/s [2024-11-05T15:58:49.196Z] [2024-11-05 15:58:49.193584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:17.043 "name": "raid_bdev1", 00:29:17.043 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:17.043 "strip_size_kb": 0, 00:29:17.043 "state": "online", 00:29:17.043 "raid_level": "raid1", 00:29:17.043 "superblock": false, 00:29:17.043 "num_base_bdevs": 2, 00:29:17.043 "num_base_bdevs_discovered": 2, 00:29:17.043 "num_base_bdevs_operational": 2, 00:29:17.043 "process": { 00:29:17.043 "type": "rebuild", 00:29:17.043 "target": "spare", 00:29:17.043 "progress": { 00:29:17.043 "blocks": 28672, 00:29:17.043 "percent": 43 00:29:17.043 } 00:29:17.043 }, 00:29:17.043 "base_bdevs_list": [ 00:29:17.043 { 00:29:17.043 "name": "spare", 00:29:17.043 "uuid": "cfd5b327-570a-5067-81eb-934106452cd9", 00:29:17.043 "is_configured": true, 00:29:17.043 "data_offset": 0, 00:29:17.043 "data_size": 65536 00:29:17.043 }, 00:29:17.043 { 00:29:17.043 "name": "BaseBdev2", 00:29:17.043 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:17.043 "is_configured": true, 00:29:17.043 "data_offset": 0, 00:29:17.043 "data_size": 65536 00:29:17.043 } 00:29:17.043 ] 00:29:17.043 }' 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:17.043 15:58:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.043 15:58:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:17.911 106.80 IOPS, 320.40 MiB/s [2024-11-05T15:58:50.326Z] [2024-11-05 15:58:50.115451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:18.172 "name": "raid_bdev1", 00:29:18.172 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:18.172 "strip_size_kb": 0, 00:29:18.172 "state": "online", 00:29:18.172 "raid_level": "raid1", 00:29:18.172 "superblock": false, 00:29:18.172 "num_base_bdevs": 2, 00:29:18.172 
"num_base_bdevs_discovered": 2, 00:29:18.172 "num_base_bdevs_operational": 2, 00:29:18.172 "process": { 00:29:18.172 "type": "rebuild", 00:29:18.172 "target": "spare", 00:29:18.172 "progress": { 00:29:18.172 "blocks": 47104, 00:29:18.172 "percent": 71 00:29:18.172 } 00:29:18.172 }, 00:29:18.172 "base_bdevs_list": [ 00:29:18.172 { 00:29:18.172 "name": "spare", 00:29:18.172 "uuid": "cfd5b327-570a-5067-81eb-934106452cd9", 00:29:18.172 "is_configured": true, 00:29:18.172 "data_offset": 0, 00:29:18.172 "data_size": 65536 00:29:18.172 }, 00:29:18.172 { 00:29:18.172 "name": "BaseBdev2", 00:29:18.172 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:18.172 "is_configured": true, 00:29:18.172 "data_offset": 0, 00:29:18.172 "data_size": 65536 00:29:18.172 } 00:29:18.172 ] 00:29:18.172 }' 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:18.172 15:58:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:18.433 [2024-11-05 15:58:50.647787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:29:18.694 [2024-11-05 15:58:50.966959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:29:18.956 94.67 IOPS, 284.00 MiB/s [2024-11-05T15:58:51.371Z] [2024-11-05 15:58:51.284300] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:19.217 [2024-11-05 15:58:51.384316] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
00:29:19.217 [2024-11-05 15:58:51.385745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:19.217 "name": "raid_bdev1", 00:29:19.217 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:19.217 "strip_size_kb": 0, 00:29:19.217 "state": "online", 00:29:19.217 "raid_level": "raid1", 00:29:19.217 "superblock": false, 00:29:19.217 "num_base_bdevs": 2, 00:29:19.217 "num_base_bdevs_discovered": 2, 00:29:19.217 "num_base_bdevs_operational": 2, 00:29:19.217 "base_bdevs_list": [ 00:29:19.217 { 00:29:19.217 "name": "spare", 00:29:19.217 "uuid": "cfd5b327-570a-5067-81eb-934106452cd9", 00:29:19.217 "is_configured": true, 00:29:19.217 "data_offset": 0, 
00:29:19.217 "data_size": 65536 00:29:19.217 }, 00:29:19.217 { 00:29:19.217 "name": "BaseBdev2", 00:29:19.217 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:19.217 "is_configured": true, 00:29:19.217 "data_offset": 0, 00:29:19.217 "data_size": 65536 00:29:19.217 } 00:29:19.217 ] 00:29:19.217 }' 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.217 15:58:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:19.217 "name": "raid_bdev1", 00:29:19.217 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:19.217 "strip_size_kb": 0, 00:29:19.217 "state": "online", 00:29:19.217 "raid_level": "raid1", 00:29:19.217 "superblock": false, 00:29:19.217 "num_base_bdevs": 2, 00:29:19.217 "num_base_bdevs_discovered": 2, 00:29:19.217 "num_base_bdevs_operational": 2, 00:29:19.217 "base_bdevs_list": [ 00:29:19.217 { 00:29:19.217 "name": "spare", 00:29:19.217 "uuid": "cfd5b327-570a-5067-81eb-934106452cd9", 00:29:19.217 "is_configured": true, 00:29:19.217 "data_offset": 0, 00:29:19.217 "data_size": 65536 00:29:19.217 }, 00:29:19.217 { 00:29:19.217 "name": "BaseBdev2", 00:29:19.217 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:19.217 "is_configured": true, 00:29:19.217 "data_offset": 0, 00:29:19.217 "data_size": 65536 00:29:19.217 } 00:29:19.217 ] 00:29:19.217 }' 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:19.217 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:19.479 15:58:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.479 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.480 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.480 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.480 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.480 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:19.480 "name": "raid_bdev1", 00:29:19.480 "uuid": "a1a9bd84-4b1e-425c-ab51-c6ddaf1892cd", 00:29:19.480 "strip_size_kb": 0, 00:29:19.480 "state": "online", 00:29:19.480 "raid_level": "raid1", 00:29:19.480 "superblock": false, 00:29:19.480 "num_base_bdevs": 2, 00:29:19.480 "num_base_bdevs_discovered": 2, 00:29:19.480 "num_base_bdevs_operational": 2, 00:29:19.480 "base_bdevs_list": [ 00:29:19.480 { 00:29:19.480 "name": "spare", 00:29:19.480 "uuid": "cfd5b327-570a-5067-81eb-934106452cd9", 00:29:19.480 "is_configured": true, 00:29:19.480 "data_offset": 0, 00:29:19.480 "data_size": 65536 00:29:19.480 }, 00:29:19.480 { 00:29:19.480 "name": "BaseBdev2", 00:29:19.480 "uuid": "08bdb887-8c4c-5b2b-a66f-e643a675feef", 00:29:19.480 "is_configured": true, 00:29:19.480 "data_offset": 0, 00:29:19.480 "data_size": 65536 00:29:19.480 } 
00:29:19.480 ] 00:29:19.480 }' 00:29:19.480 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:19.480 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.741 15:58:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:19.741 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.741 15:58:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.741 [2024-11-05 15:58:51.973985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:19.741 [2024-11-05 15:58:51.974012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:19.741 85.00 IOPS, 255.00 MiB/s 00:29:19.741 Latency(us) 00:29:19.741 [2024-11-05T15:58:52.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.741 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:19.741 raid_bdev1 : 7.02 84.99 254.97 0.00 0.00 15871.06 289.87 112116.97 00:29:19.741 [2024-11-05T15:58:52.156Z] =================================================================================================================== 00:29:19.741 [2024-11-05T15:58:52.156Z] Total : 84.99 254.97 0.00 0.00 15871.06 289.87 112116.97 00:29:19.741 [2024-11-05 15:58:52.033211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.741 [2024-11-05 15:58:52.033246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:19.741 [2024-11-05 15:58:52.033308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:19.741 [2024-11-05 15:58:52.033317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:19.741 { 00:29:19.741 "results": [ 
00:29:19.741 { 00:29:19.741 "job": "raid_bdev1", 00:29:19.741 "core_mask": "0x1", 00:29:19.741 "workload": "randrw", 00:29:19.741 "percentage": 50, 00:29:19.741 "status": "finished", 00:29:19.741 "queue_depth": 2, 00:29:19.741 "io_size": 3145728, 00:29:19.741 "runtime": 7.024247, 00:29:19.741 "iops": 84.99131650695085, 00:29:19.741 "mibps": 254.97394952085256, 00:29:19.741 "io_failed": 0, 00:29:19.741 "io_timeout": 0, 00:29:19.741 "avg_latency_us": 15871.059255250613, 00:29:19.741 "min_latency_us": 289.8707692307692, 00:29:19.741 "max_latency_us": 112116.97230769231 00:29:19.741 } 00:29:19.741 ], 00:29:19.741 "core_count": 1 00:29:19.741 } 00:29:19.741 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.741 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.741 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.741 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.741 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:19.742 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:29:20.002 /dev/nbd0 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:20.003 1+0 records in 
00:29:20.003 1+0 records out 00:29:20.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235134 s, 17.4 MB/s 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:20.003 15:58:52 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:20.003 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:20.265 /dev/nbd1 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:20.265 1+0 records in 00:29:20.265 1+0 records out 00:29:20.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292924 s, 14.0 MB/s 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:20.265 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:20.526 15:58:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 74141 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 74141 ']' 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 74141 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74141 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:20.787 killing process with pid 74141 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74141' 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 74141 00:29:20.787 Received shutdown signal, test time was about 8.131650 seconds 00:29:20.787 00:29:20.787 Latency(us) 00:29:20.787 [2024-11-05T15:58:53.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.787 [2024-11-05T15:58:53.202Z] =================================================================================================================== 00:29:20.787 [2024-11-05T15:58:53.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.787 [2024-11-05 15:58:53.129349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:20.787 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 74141 00:29:21.048 [2024-11-05 15:58:53.238883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:21.621 00:29:21.621 real 0m10.413s 00:29:21.621 user 
0m12.798s 00:29:21.621 sys 0m1.200s 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:21.621 ************************************ 00:29:21.621 END TEST raid_rebuild_test_io 00:29:21.621 ************************************ 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:21.621 15:58:53 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:29:21.621 15:58:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:29:21.621 15:58:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:21.621 15:58:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:21.621 ************************************ 00:29:21.621 START TEST raid_rebuild_test_sb_io 00:29:21.621 ************************************ 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74497 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74497 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 74497 ']' 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:21.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:21.621 15:58:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:21.621 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:21.621 Zero copy mechanism will not be used. 00:29:21.621 [2024-11-05 15:58:53.950331] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:29:21.621 [2024-11-05 15:58:53.950456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74497 ] 00:29:21.882 [2024-11-05 15:58:54.109198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.882 [2024-11-05 15:58:54.233412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.143 [2024-11-05 15:58:54.392400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.143 [2024-11-05 15:58:54.392449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.410 BaseBdev1_malloc 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.410 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 [2024-11-05 15:58:54.832211] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:22.672 [2024-11-05 15:58:54.832277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.672 [2024-11-05 15:58:54.832298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:22.672 [2024-11-05 15:58:54.832310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.672 [2024-11-05 15:58:54.834448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.672 [2024-11-05 15:58:54.834487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:22.672 BaseBdev1 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 BaseBdev2_malloc 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 [2024-11-05 15:58:54.868137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:22.672 [2024-11-05 15:58:54.868189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:29:22.672 [2024-11-05 15:58:54.868205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:22.672 [2024-11-05 15:58:54.868218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.672 [2024-11-05 15:58:54.870257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.672 [2024-11-05 15:58:54.870292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:22.672 BaseBdev2 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 spare_malloc 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 spare_delay 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 
[2024-11-05 15:58:54.919572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:22.672 [2024-11-05 15:58:54.919739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.672 [2024-11-05 15:58:54.919763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:22.672 [2024-11-05 15:58:54.919774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.672 [2024-11-05 15:58:54.921883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.672 [2024-11-05 15:58:54.921913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:22.672 spare 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 [2024-11-05 15:58:54.927633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:22.672 [2024-11-05 15:58:54.929428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:22.672 [2024-11-05 15:58:54.929581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:22.672 [2024-11-05 15:58:54.929595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:22.672 [2024-11-05 15:58:54.929833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:22.672 [2024-11-05 15:58:54.929998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:22.672 [2024-11-05 
15:58:54.930039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:22.672 [2024-11-05 15:58:54.930179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.672 "name": "raid_bdev1", 00:29:22.672 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:22.672 "strip_size_kb": 0, 00:29:22.672 "state": "online", 00:29:22.672 "raid_level": "raid1", 00:29:22.672 "superblock": true, 00:29:22.672 "num_base_bdevs": 2, 00:29:22.672 "num_base_bdevs_discovered": 2, 00:29:22.672 "num_base_bdevs_operational": 2, 00:29:22.672 "base_bdevs_list": [ 00:29:22.672 { 00:29:22.672 "name": "BaseBdev1", 00:29:22.672 "uuid": "9a61002b-129b-5248-8fa0-ff35105b3a63", 00:29:22.672 "is_configured": true, 00:29:22.672 "data_offset": 2048, 00:29:22.672 "data_size": 63488 00:29:22.672 }, 00:29:22.672 { 00:29:22.672 "name": "BaseBdev2", 00:29:22.672 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:22.672 "is_configured": true, 00:29:22.672 "data_offset": 2048, 00:29:22.672 "data_size": 63488 00:29:22.672 } 00:29:22.672 ] 00:29:22.672 }' 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.672 15:58:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.934 [2024-11-05 15:58:55.255996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.934 [2024-11-05 15:58:55.311683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.934 "name": "raid_bdev1", 00:29:22.934 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:22.934 "strip_size_kb": 0, 00:29:22.934 "state": "online", 00:29:22.934 "raid_level": "raid1", 00:29:22.934 "superblock": true, 00:29:22.934 "num_base_bdevs": 2, 00:29:22.934 "num_base_bdevs_discovered": 1, 00:29:22.934 "num_base_bdevs_operational": 1, 00:29:22.934 "base_bdevs_list": [ 00:29:22.934 { 00:29:22.934 "name": null, 00:29:22.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.934 "is_configured": false, 00:29:22.934 "data_offset": 0, 00:29:22.934 "data_size": 63488 00:29:22.934 }, 00:29:22.934 { 00:29:22.934 "name": "BaseBdev2", 00:29:22.934 "uuid": 
"b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:22.934 "is_configured": true, 00:29:22.934 "data_offset": 2048, 00:29:22.934 "data_size": 63488 00:29:22.934 } 00:29:22.934 ] 00:29:22.934 }' 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.934 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.195 [2024-11-05 15:58:55.397023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:23.195 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:23.195 Zero copy mechanism will not be used. 00:29:23.195 Running I/O for 60 seconds... 00:29:23.456 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:23.456 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.456 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.456 [2024-11-05 15:58:55.627336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:23.456 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.456 15:58:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:23.456 [2024-11-05 15:58:55.701814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:23.456 [2024-11-05 15:58:55.703747] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:23.456 [2024-11-05 15:58:55.818339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:23.456 [2024-11-05 15:58:55.818749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:23.717 [2024-11-05 15:58:56.040833] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:23.717 [2024-11-05 15:58:56.041089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:23.977 [2024-11-05 15:58:56.367553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:23.977 [2024-11-05 15:58:56.368033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:24.251 154.00 IOPS, 462.00 MiB/s [2024-11-05T15:58:56.666Z] [2024-11-05 15:58:56.505216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:24.523 "name": "raid_bdev1", 00:29:24.523 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:24.523 "strip_size_kb": 0, 00:29:24.523 "state": "online", 00:29:24.523 "raid_level": "raid1", 00:29:24.523 "superblock": true, 00:29:24.523 "num_base_bdevs": 2, 00:29:24.523 "num_base_bdevs_discovered": 2, 00:29:24.523 "num_base_bdevs_operational": 2, 00:29:24.523 "process": { 00:29:24.523 "type": "rebuild", 00:29:24.523 "target": "spare", 00:29:24.523 "progress": { 00:29:24.523 "blocks": 10240, 00:29:24.523 "percent": 16 00:29:24.523 } 00:29:24.523 }, 00:29:24.523 "base_bdevs_list": [ 00:29:24.523 { 00:29:24.523 "name": "spare", 00:29:24.523 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:24.523 "is_configured": true, 00:29:24.523 "data_offset": 2048, 00:29:24.523 "data_size": 63488 00:29:24.523 }, 00:29:24.523 { 00:29:24.523 "name": "BaseBdev2", 00:29:24.523 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:24.523 "is_configured": true, 00:29:24.523 "data_offset": 2048, 00:29:24.523 "data_size": 63488 00:29:24.523 } 00:29:24.523 ] 00:29:24.523 }' 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:24.523 [2024-11-05 15:58:56.786760] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:24.523 [2024-11-05 15:58:56.831716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:24.523 [2024-11-05 15:58:56.832259] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:24.523 [2024-11-05 15:58:56.839654] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:24.523 [2024-11-05 15:58:56.848033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.523 [2024-11-05 15:58:56.848141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:24.523 [2024-11-05 15:58:56.848171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:24.523 [2024-11-05 15:58:56.880877] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:24.523 15:58:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:24.523 "name": "raid_bdev1", 00:29:24.523 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:24.523 "strip_size_kb": 0, 00:29:24.523 "state": "online", 00:29:24.523 "raid_level": "raid1", 00:29:24.523 "superblock": true, 00:29:24.523 "num_base_bdevs": 2, 00:29:24.523 "num_base_bdevs_discovered": 1, 00:29:24.523 "num_base_bdevs_operational": 1, 00:29:24.523 "base_bdevs_list": [ 00:29:24.523 { 00:29:24.523 "name": null, 00:29:24.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.523 "is_configured": false, 00:29:24.523 "data_offset": 0, 00:29:24.523 "data_size": 63488 00:29:24.523 }, 00:29:24.523 { 00:29:24.523 "name": "BaseBdev2", 00:29:24.523 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:24.523 "is_configured": true, 00:29:24.523 "data_offset": 2048, 00:29:24.523 "data_size": 63488 00:29:24.523 } 00:29:24.523 ] 00:29:24.523 }' 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:24.523 15:58:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:25.093 "name": "raid_bdev1", 00:29:25.093 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:25.093 "strip_size_kb": 0, 00:29:25.093 "state": "online", 00:29:25.093 "raid_level": "raid1", 00:29:25.093 "superblock": true, 00:29:25.093 "num_base_bdevs": 2, 00:29:25.093 "num_base_bdevs_discovered": 1, 00:29:25.093 "num_base_bdevs_operational": 1, 00:29:25.093 "base_bdevs_list": [ 00:29:25.093 { 00:29:25.093 "name": null, 00:29:25.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.093 "is_configured": false, 00:29:25.093 "data_offset": 0, 00:29:25.093 "data_size": 63488 00:29:25.093 }, 00:29:25.093 { 00:29:25.093 "name": "BaseBdev2", 00:29:25.093 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 
00:29:25.093 "is_configured": true, 00:29:25.093 "data_offset": 2048, 00:29:25.093 "data_size": 63488 00:29:25.093 } 00:29:25.093 ] 00:29:25.093 }' 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.093 [2024-11-05 15:58:57.322327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.093 15:58:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:25.093 [2024-11-05 15:58:57.396221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:25.093 [2024-11-05 15:58:57.398135] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:25.350 163.50 IOPS, 490.50 MiB/s [2024-11-05T15:58:57.765Z] [2024-11-05 15:58:57.521262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:25.350 [2024-11-05 15:58:57.521680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:25.350 [2024-11-05 15:58:57.739621] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:25.350 [2024-11-05 15:58:57.739838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:25.915 [2024-11-05 15:58:58.074749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:25.915 [2024-11-05 15:58:58.079887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:25.915 [2024-11-05 15:58:58.308174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:26.173 "name": "raid_bdev1", 
00:29:26.173 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:26.173 "strip_size_kb": 0, 00:29:26.173 "state": "online", 00:29:26.173 "raid_level": "raid1", 00:29:26.173 "superblock": true, 00:29:26.173 "num_base_bdevs": 2, 00:29:26.173 "num_base_bdevs_discovered": 2, 00:29:26.173 "num_base_bdevs_operational": 2, 00:29:26.173 "process": { 00:29:26.173 "type": "rebuild", 00:29:26.173 "target": "spare", 00:29:26.173 "progress": { 00:29:26.173 "blocks": 10240, 00:29:26.173 "percent": 16 00:29:26.173 } 00:29:26.173 }, 00:29:26.173 "base_bdevs_list": [ 00:29:26.173 { 00:29:26.173 "name": "spare", 00:29:26.173 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:26.173 "is_configured": true, 00:29:26.173 "data_offset": 2048, 00:29:26.173 "data_size": 63488 00:29:26.173 }, 00:29:26.173 { 00:29:26.173 "name": "BaseBdev2", 00:29:26.173 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:26.173 "is_configured": true, 00:29:26.173 "data_offset": 2048, 00:29:26.173 "data_size": 63488 00:29:26.173 } 00:29:26.173 ] 00:29:26.173 }' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:26.173 142.67 IOPS, 428.00 MiB/s [2024-11-05T15:58:58.588Z] 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:29:26.173 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 
00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=314 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:26.173 "name": "raid_bdev1", 00:29:26.173 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:26.173 "strip_size_kb": 0, 00:29:26.173 "state": "online", 00:29:26.173 "raid_level": "raid1", 00:29:26.173 "superblock": true, 00:29:26.173 "num_base_bdevs": 2, 00:29:26.173 "num_base_bdevs_discovered": 2, 00:29:26.173 
"num_base_bdevs_operational": 2, 00:29:26.173 "process": { 00:29:26.173 "type": "rebuild", 00:29:26.173 "target": "spare", 00:29:26.173 "progress": { 00:29:26.173 "blocks": 12288, 00:29:26.173 "percent": 19 00:29:26.173 } 00:29:26.173 }, 00:29:26.173 "base_bdevs_list": [ 00:29:26.173 { 00:29:26.173 "name": "spare", 00:29:26.173 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:26.173 "is_configured": true, 00:29:26.173 "data_offset": 2048, 00:29:26.173 "data_size": 63488 00:29:26.173 }, 00:29:26.173 { 00:29:26.173 "name": "BaseBdev2", 00:29:26.173 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:26.173 "is_configured": true, 00:29:26.173 "data_offset": 2048, 00:29:26.173 "data_size": 63488 00:29:26.173 } 00:29:26.173 ] 00:29:26.173 }' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:26.173 [2024-11-05 15:58:58.527234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.173 15:58:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:26.431 [2024-11-05 15:58:58.728577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:26.431 [2024-11-05 15:58:58.728801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:26.688 [2024-11-05 15:58:58.953204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:26.688 [2024-11-05 
15:58:58.953655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:26.945 [2024-11-05 15:58:59.176071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:26.945 [2024-11-05 15:58:59.176403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:27.204 120.25 IOPS, 360.75 MiB/s [2024-11-05T15:58:59.619Z] [2024-11-05 15:58:59.420060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:27.204 [2024-11-05 15:58:59.536298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.204 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:27.204 "name": "raid_bdev1", 00:29:27.204 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:27.204 "strip_size_kb": 0, 00:29:27.204 "state": "online", 00:29:27.204 "raid_level": "raid1", 00:29:27.204 "superblock": true, 00:29:27.204 "num_base_bdevs": 2, 00:29:27.204 "num_base_bdevs_discovered": 2, 00:29:27.204 "num_base_bdevs_operational": 2, 00:29:27.204 "process": { 00:29:27.204 "type": "rebuild", 00:29:27.204 "target": "spare", 00:29:27.204 "progress": { 00:29:27.204 "blocks": 28672, 00:29:27.204 "percent": 45 00:29:27.204 } 00:29:27.205 }, 00:29:27.205 "base_bdevs_list": [ 00:29:27.205 { 00:29:27.205 "name": "spare", 00:29:27.205 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:27.205 "is_configured": true, 00:29:27.205 "data_offset": 2048, 00:29:27.205 "data_size": 63488 00:29:27.205 }, 00:29:27.205 { 00:29:27.205 "name": "BaseBdev2", 00:29:27.205 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:27.205 "is_configured": true, 00:29:27.205 "data_offset": 2048, 00:29:27.205 "data_size": 63488 00:29:27.205 } 00:29:27.205 ] 00:29:27.205 }' 00:29:27.205 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:27.462 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:27.462 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:27.462 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:27.462 15:58:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:27.720 [2024-11-05 15:58:59.887615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 
offset_end: 36864 00:29:27.720 [2024-11-05 15:59:00.000180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:28.287 108.60 IOPS, 325.80 MiB/s [2024-11-05T15:59:00.702Z] 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:28.287 "name": "raid_bdev1", 00:29:28.287 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:28.287 "strip_size_kb": 0, 00:29:28.287 "state": "online", 00:29:28.287 "raid_level": "raid1", 00:29:28.287 "superblock": true, 00:29:28.287 "num_base_bdevs": 2, 00:29:28.287 "num_base_bdevs_discovered": 2, 00:29:28.287 "num_base_bdevs_operational": 2, 00:29:28.287 "process": { 00:29:28.287 "type": 
"rebuild", 00:29:28.287 "target": "spare", 00:29:28.287 "progress": { 00:29:28.287 "blocks": 45056, 00:29:28.287 "percent": 70 00:29:28.287 } 00:29:28.287 }, 00:29:28.287 "base_bdevs_list": [ 00:29:28.287 { 00:29:28.287 "name": "spare", 00:29:28.287 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:28.287 "is_configured": true, 00:29:28.287 "data_offset": 2048, 00:29:28.287 "data_size": 63488 00:29:28.287 }, 00:29:28.287 { 00:29:28.287 "name": "BaseBdev2", 00:29:28.287 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:28.287 "is_configured": true, 00:29:28.287 "data_offset": 2048, 00:29:28.287 "data_size": 63488 00:29:28.287 } 00:29:28.287 ] 00:29:28.287 }' 00:29:28.287 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:28.546 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:28.546 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:28.546 [2024-11-05 15:59:00.753520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:28.546 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:28.546 15:59:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:28.803 [2024-11-05 15:59:00.989891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:29:29.061 97.67 IOPS, 293.00 MiB/s [2024-11-05T15:59:01.476Z] [2024-11-05 15:59:01.421666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:29:29.061 [2024-11-05 15:59:01.422057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:29.624 "name": "raid_bdev1", 00:29:29.624 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:29.624 "strip_size_kb": 0, 00:29:29.624 "state": "online", 00:29:29.624 "raid_level": "raid1", 00:29:29.624 "superblock": true, 00:29:29.624 "num_base_bdevs": 2, 00:29:29.624 "num_base_bdevs_discovered": 2, 00:29:29.624 "num_base_bdevs_operational": 2, 00:29:29.624 "process": { 00:29:29.624 "type": "rebuild", 00:29:29.624 "target": "spare", 00:29:29.624 "progress": { 00:29:29.624 "blocks": 61440, 00:29:29.624 "percent": 96 00:29:29.624 } 00:29:29.624 }, 00:29:29.624 "base_bdevs_list": [ 00:29:29.624 { 00:29:29.624 "name": "spare", 00:29:29.624 "uuid": 
"d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:29.624 "is_configured": true, 00:29:29.624 "data_offset": 2048, 00:29:29.624 "data_size": 63488 00:29:29.624 }, 00:29:29.624 { 00:29:29.624 "name": "BaseBdev2", 00:29:29.624 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:29.624 "is_configured": true, 00:29:29.624 "data_offset": 2048, 00:29:29.624 "data_size": 63488 00:29:29.624 } 00:29:29.624 ] 00:29:29.624 }' 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:29.624 [2024-11-05 15:59:01.858409] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.624 15:59:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:29.624 [2024-11-05 15:59:01.957001] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:29.624 [2024-11-05 15:59:01.958623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:30.753 88.86 IOPS, 266.57 MiB/s [2024-11-05T15:59:03.168Z] 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:30.753 "name": "raid_bdev1", 00:29:30.753 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:30.753 "strip_size_kb": 0, 00:29:30.753 "state": "online", 00:29:30.753 "raid_level": "raid1", 00:29:30.753 "superblock": true, 00:29:30.753 "num_base_bdevs": 2, 00:29:30.753 "num_base_bdevs_discovered": 2, 00:29:30.753 "num_base_bdevs_operational": 2, 00:29:30.753 "base_bdevs_list": [ 00:29:30.753 { 00:29:30.753 "name": "spare", 00:29:30.753 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:30.753 "is_configured": true, 00:29:30.753 "data_offset": 2048, 00:29:30.753 "data_size": 63488 00:29:30.753 }, 00:29:30.753 { 00:29:30.753 "name": "BaseBdev2", 00:29:30.753 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:30.753 "is_configured": true, 00:29:30.753 "data_offset": 2048, 00:29:30.753 "data_size": 63488 00:29:30.753 } 00:29:30.753 ] 00:29:30.753 }' 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 15:59:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.753 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:30.753 "name": "raid_bdev1", 00:29:30.753 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:30.753 "strip_size_kb": 0, 00:29:30.753 "state": "online", 00:29:30.753 "raid_level": "raid1", 00:29:30.753 "superblock": true, 00:29:30.753 "num_base_bdevs": 2, 00:29:30.753 "num_base_bdevs_discovered": 2, 00:29:30.753 "num_base_bdevs_operational": 2, 00:29:30.753 "base_bdevs_list": [ 00:29:30.753 { 00:29:30.753 "name": "spare", 00:29:30.753 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:30.753 "is_configured": true, 00:29:30.753 
"data_offset": 2048, 00:29:30.753 "data_size": 63488 00:29:30.753 }, 00:29:30.753 { 00:29:30.753 "name": "BaseBdev2", 00:29:30.753 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:30.753 "is_configured": true, 00:29:30.753 "data_offset": 2048, 00:29:30.753 "data_size": 63488 00:29:30.753 } 00:29:30.753 ] 00:29:30.753 }' 00:29:30.753 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:30.753 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:30.754 "name": "raid_bdev1", 00:29:30.754 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:30.754 "strip_size_kb": 0, 00:29:30.754 "state": "online", 00:29:30.754 "raid_level": "raid1", 00:29:30.754 "superblock": true, 00:29:30.754 "num_base_bdevs": 2, 00:29:30.754 "num_base_bdevs_discovered": 2, 00:29:30.754 "num_base_bdevs_operational": 2, 00:29:30.754 "base_bdevs_list": [ 00:29:30.754 { 00:29:30.754 "name": "spare", 00:29:30.754 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:30.754 "is_configured": true, 00:29:30.754 "data_offset": 2048, 00:29:30.754 "data_size": 63488 00:29:30.754 }, 00:29:30.754 { 00:29:30.754 "name": "BaseBdev2", 00:29:30.754 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:30.754 "is_configured": true, 00:29:30.754 "data_offset": 2048, 00:29:30.754 "data_size": 63488 00:29:30.754 } 00:29:30.754 ] 00:29:30.754 }' 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:30.754 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.011 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:31.011 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.011 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.011 
[2024-11-05 15:59:03.398562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:31.011 [2024-11-05 15:59:03.398588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:31.269 82.38 IOPS, 247.12 MiB/s 00:29:31.269 Latency(us) 00:29:31.269 [2024-11-05T15:59:03.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.269 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:31.269 raid_bdev1 : 8.05 82.14 246.42 0.00 0.00 16400.55 258.36 115343.36 00:29:31.269 [2024-11-05T15:59:03.684Z] =================================================================================================================== 00:29:31.269 [2024-11-05T15:59:03.684Z] Total : 82.14 246.42 0.00 0.00 16400.55 258.36 115343.36 00:29:31.269 [2024-11-05 15:59:03.458144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:31.269 [2024-11-05 15:59:03.458264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:31.269 [2024-11-05 15:59:03.458345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:31.269 [2024-11-05 15:59:03.458506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:31.269 { 00:29:31.269 "results": [ 00:29:31.269 { 00:29:31.269 "job": "raid_bdev1", 00:29:31.269 "core_mask": "0x1", 00:29:31.269 "workload": "randrw", 00:29:31.269 "percentage": 50, 00:29:31.269 "status": "finished", 00:29:31.269 "queue_depth": 2, 00:29:31.269 "io_size": 3145728, 00:29:31.269 "runtime": 8.047324, 00:29:31.269 "iops": 82.13910611776039, 00:29:31.269 "mibps": 246.4173183532812, 00:29:31.269 "io_failed": 0, 00:29:31.269 "io_timeout": 0, 00:29:31.269 "avg_latency_us": 16400.548688467356, 00:29:31.269 "min_latency_us": 258.3630769230769, 00:29:31.269 "max_latency_us": 115343.36 
00:29:31.269 } 00:29:31.269 ], 00:29:31.269 "core_count": 1 00:29:31.269 } 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:29:31.269 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:29:31.270 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:29:31.528 /dev/nbd0 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:31.528 1+0 records in 00:29:31.528 1+0 records out 00:29:31.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257585 s, 15.9 MB/s 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:31.528 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:31.528 /dev/nbd1 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:31.786 15:59:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:31.786 1+0 records in 00:29:31.786 1+0 records out 00:29:31.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281115 s, 14.6 MB/s 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:31.786 15:59:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:31.786 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:29:31.786 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:31.786 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:31.786 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:31.786 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:31.786 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.786 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:32.044 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.045 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.303 [2024-11-05 15:59:04.463553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:32.303 [2024-11-05 15:59:04.463600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.303 [2024-11-05 15:59:04.463615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:32.303 [2024-11-05 15:59:04.463624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.303 [2024-11-05 15:59:04.465402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.303 [2024-11-05 15:59:04.465436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:32.303 [2024-11-05 15:59:04.465506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:32.303 [2024-11-05 15:59:04.465546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:32.303 [2024-11-05 15:59:04.465647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:32.303 spare 00:29:32.303 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.303 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:29:32.303 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.303 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.303 [2024-11-05 15:59:04.565731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:29:32.303 [2024-11-05 15:59:04.565762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:32.303 [2024-11-05 15:59:04.566029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:29:32.303 [2024-11-05 15:59:04.566176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:29:32.303 [2024-11-05 15:59:04.566192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:29:32.303 [2024-11-05 15:59:04.566329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.303 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.304 15:59:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.304 "name": "raid_bdev1", 00:29:32.304 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:32.304 "strip_size_kb": 0, 00:29:32.304 "state": "online", 00:29:32.304 "raid_level": "raid1", 00:29:32.304 "superblock": true, 00:29:32.304 "num_base_bdevs": 2, 00:29:32.304 "num_base_bdevs_discovered": 2, 00:29:32.304 "num_base_bdevs_operational": 2, 00:29:32.304 "base_bdevs_list": [ 00:29:32.304 { 00:29:32.304 "name": "spare", 00:29:32.304 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:32.304 "is_configured": true, 00:29:32.304 "data_offset": 2048, 00:29:32.304 "data_size": 63488 00:29:32.304 }, 00:29:32.304 { 00:29:32.304 "name": "BaseBdev2", 00:29:32.304 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:32.304 "is_configured": true, 00:29:32.304 "data_offset": 2048, 00:29:32.304 "data_size": 63488 00:29:32.304 } 00:29:32.304 ] 00:29:32.304 }' 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.304 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:32.562 15:59:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:32.562 "name": "raid_bdev1", 00:29:32.562 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:32.562 "strip_size_kb": 0, 00:29:32.562 "state": "online", 00:29:32.562 "raid_level": "raid1", 00:29:32.562 "superblock": true, 00:29:32.562 "num_base_bdevs": 2, 00:29:32.562 "num_base_bdevs_discovered": 2, 00:29:32.562 "num_base_bdevs_operational": 2, 00:29:32.562 "base_bdevs_list": [ 00:29:32.562 { 00:29:32.562 "name": "spare", 00:29:32.562 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:32.562 "is_configured": true, 00:29:32.562 "data_offset": 2048, 00:29:32.562 "data_size": 63488 00:29:32.562 }, 00:29:32.562 { 00:29:32.562 "name": "BaseBdev2", 00:29:32.562 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:32.562 "is_configured": true, 00:29:32.562 "data_offset": 2048, 00:29:32.562 "data_size": 63488 00:29:32.562 } 00:29:32.562 ] 00:29:32.562 }' 00:29:32.562 15:59:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.562 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:32.820 15:59:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.820 [2024-11-05 15:59:05.011762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.820 "name": "raid_bdev1", 00:29:32.820 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:32.820 "strip_size_kb": 0, 00:29:32.820 "state": "online", 00:29:32.820 "raid_level": "raid1", 00:29:32.820 "superblock": true, 00:29:32.820 "num_base_bdevs": 2, 00:29:32.820 "num_base_bdevs_discovered": 1, 00:29:32.820 "num_base_bdevs_operational": 1, 00:29:32.820 "base_bdevs_list": [ 00:29:32.820 { 00:29:32.820 "name": null, 00:29:32.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.820 "is_configured": false, 00:29:32.820 
"data_offset": 0, 00:29:32.820 "data_size": 63488 00:29:32.820 }, 00:29:32.820 { 00:29:32.820 "name": "BaseBdev2", 00:29:32.820 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:32.820 "is_configured": true, 00:29:32.820 "data_offset": 2048, 00:29:32.820 "data_size": 63488 00:29:32.820 } 00:29:32.820 ] 00:29:32.820 }' 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.820 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:33.078 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:33.078 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.078 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:33.078 [2024-11-05 15:59:05.343879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:33.078 [2024-11-05 15:59:05.344026] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:33.078 [2024-11-05 15:59:05.344037] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:33.078 [2024-11-05 15:59:05.344073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:33.078 [2024-11-05 15:59:05.352986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:29:33.078 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.078 15:59:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:33.078 [2024-11-05 15:59:05.354489] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.011 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:34.011 "name": "raid_bdev1", 00:29:34.011 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:34.011 "strip_size_kb": 0, 00:29:34.011 "state": "online", 
00:29:34.011 "raid_level": "raid1", 00:29:34.011 "superblock": true, 00:29:34.011 "num_base_bdevs": 2, 00:29:34.011 "num_base_bdevs_discovered": 2, 00:29:34.011 "num_base_bdevs_operational": 2, 00:29:34.011 "process": { 00:29:34.011 "type": "rebuild", 00:29:34.011 "target": "spare", 00:29:34.011 "progress": { 00:29:34.011 "blocks": 20480, 00:29:34.011 "percent": 32 00:29:34.011 } 00:29:34.011 }, 00:29:34.011 "base_bdevs_list": [ 00:29:34.011 { 00:29:34.011 "name": "spare", 00:29:34.011 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:34.011 "is_configured": true, 00:29:34.011 "data_offset": 2048, 00:29:34.011 "data_size": 63488 00:29:34.011 }, 00:29:34.011 { 00:29:34.011 "name": "BaseBdev2", 00:29:34.012 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:34.012 "is_configured": true, 00:29:34.012 "data_offset": 2048, 00:29:34.012 "data_size": 63488 00:29:34.012 } 00:29:34.012 ] 00:29:34.012 }' 00:29:34.012 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:34.012 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.269 [2024-11-05 15:59:06.456799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:34.269 [2024-11-05 15:59:06.459403] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:34.269 [2024-11-05 
15:59:06.459449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.269 [2024-11-05 15:59:06.459463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:34.269 [2024-11-05 15:59:06.459469] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:34.269 "name": "raid_bdev1", 00:29:34.269 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:34.269 "strip_size_kb": 0, 00:29:34.269 "state": "online", 00:29:34.269 "raid_level": "raid1", 00:29:34.269 "superblock": true, 00:29:34.269 "num_base_bdevs": 2, 00:29:34.269 "num_base_bdevs_discovered": 1, 00:29:34.269 "num_base_bdevs_operational": 1, 00:29:34.269 "base_bdevs_list": [ 00:29:34.269 { 00:29:34.269 "name": null, 00:29:34.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.269 "is_configured": false, 00:29:34.269 "data_offset": 0, 00:29:34.269 "data_size": 63488 00:29:34.269 }, 00:29:34.269 { 00:29:34.269 "name": "BaseBdev2", 00:29:34.269 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:34.269 "is_configured": true, 00:29:34.269 "data_offset": 2048, 00:29:34.269 "data_size": 63488 00:29:34.269 } 00:29:34.269 ] 00:29:34.269 }' 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:34.269 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.527 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:34.527 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.527 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.527 [2024-11-05 15:59:06.799023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:34.527 [2024-11-05 15:59:06.799083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.527 [2024-11-05 15:59:06.799102] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:29:34.527 [2024-11-05 15:59:06.799108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.527 [2024-11-05 15:59:06.799472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.527 [2024-11-05 15:59:06.799495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:34.527 [2024-11-05 15:59:06.799571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:34.527 [2024-11-05 15:59:06.799580] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:34.527 [2024-11-05 15:59:06.799591] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:34.527 [2024-11-05 15:59:06.799607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:34.527 [2024-11-05 15:59:06.808727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:29:34.527 spare 00:29:34.527 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.527 15:59:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:34.527 [2024-11-05 15:59:06.810279] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:35.460 "name": "raid_bdev1", 00:29:35.460 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:35.460 "strip_size_kb": 0, 00:29:35.460 "state": "online", 00:29:35.460 "raid_level": "raid1", 00:29:35.460 "superblock": true, 00:29:35.460 "num_base_bdevs": 2, 00:29:35.460 "num_base_bdevs_discovered": 2, 00:29:35.460 "num_base_bdevs_operational": 2, 00:29:35.460 "process": { 00:29:35.460 "type": "rebuild", 00:29:35.460 "target": "spare", 00:29:35.460 "progress": { 00:29:35.460 "blocks": 20480, 00:29:35.460 "percent": 32 00:29:35.460 } 00:29:35.460 }, 00:29:35.460 "base_bdevs_list": [ 00:29:35.460 { 00:29:35.460 "name": "spare", 00:29:35.460 "uuid": "d7f3e7a4-8ab9-5e8c-83d6-f16aead2fec8", 00:29:35.460 "is_configured": true, 00:29:35.460 "data_offset": 2048, 00:29:35.460 "data_size": 63488 00:29:35.460 }, 00:29:35.460 { 00:29:35.460 "name": "BaseBdev2", 00:29:35.460 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:35.460 "is_configured": true, 00:29:35.460 "data_offset": 2048, 00:29:35.460 "data_size": 63488 00:29:35.460 } 00:29:35.460 ] 00:29:35.460 }' 00:29:35.460 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.717 [2024-11-05 15:59:07.912611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:35.717 [2024-11-05 15:59:07.915211] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:35.717 [2024-11-05 15:59:07.915263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.717 [2024-11-05 15:59:07.915275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:35.717 [2024-11-05 15:59:07.915283] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.717 "name": "raid_bdev1", 00:29:35.717 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:35.717 "strip_size_kb": 0, 00:29:35.717 "state": "online", 00:29:35.717 "raid_level": "raid1", 00:29:35.717 "superblock": true, 00:29:35.717 "num_base_bdevs": 2, 00:29:35.717 "num_base_bdevs_discovered": 1, 00:29:35.717 "num_base_bdevs_operational": 1, 00:29:35.717 "base_bdevs_list": [ 00:29:35.717 { 00:29:35.717 "name": null, 00:29:35.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.717 "is_configured": false, 00:29:35.717 "data_offset": 0, 00:29:35.717 "data_size": 63488 00:29:35.717 }, 00:29:35.717 { 00:29:35.717 "name": "BaseBdev2", 00:29:35.717 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:35.717 "is_configured": true, 00:29:35.717 "data_offset": 2048, 00:29:35.717 "data_size": 63488 00:29:35.717 } 00:29:35.717 ] 00:29:35.717 }' 
00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.717 15:59:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:35.975 "name": "raid_bdev1", 00:29:35.975 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:35.975 "strip_size_kb": 0, 00:29:35.975 "state": "online", 00:29:35.975 "raid_level": "raid1", 00:29:35.975 "superblock": true, 00:29:35.975 "num_base_bdevs": 2, 00:29:35.975 "num_base_bdevs_discovered": 1, 00:29:35.975 "num_base_bdevs_operational": 1, 00:29:35.975 "base_bdevs_list": [ 00:29:35.975 { 00:29:35.975 "name": null, 00:29:35.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.975 "is_configured": false, 00:29:35.975 "data_offset": 0, 
00:29:35.975 "data_size": 63488 00:29:35.975 }, 00:29:35.975 { 00:29:35.975 "name": "BaseBdev2", 00:29:35.975 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:35.975 "is_configured": true, 00:29:35.975 "data_offset": 2048, 00:29:35.975 "data_size": 63488 00:29:35.975 } 00:29:35.975 ] 00:29:35.975 }' 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.975 [2024-11-05 15:59:08.367081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:35.975 [2024-11-05 15:59:08.367128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.975 [2024-11-05 15:59:08.367143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:35.975 [2024-11-05 15:59:08.367152] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.975 [2024-11-05 15:59:08.367485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.975 [2024-11-05 15:59:08.367509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:35.975 [2024-11-05 15:59:08.367566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:35.975 [2024-11-05 15:59:08.367579] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:35.975 [2024-11-05 15:59:08.367585] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:35.975 [2024-11-05 15:59:08.367598] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:35.975 BaseBdev1 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.975 15:59:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.346 "name": "raid_bdev1", 00:29:37.346 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:37.346 "strip_size_kb": 0, 00:29:37.346 "state": "online", 00:29:37.346 "raid_level": "raid1", 00:29:37.346 "superblock": true, 00:29:37.346 "num_base_bdevs": 2, 00:29:37.346 "num_base_bdevs_discovered": 1, 00:29:37.346 "num_base_bdevs_operational": 1, 00:29:37.346 "base_bdevs_list": [ 00:29:37.346 { 00:29:37.346 "name": null, 00:29:37.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.346 "is_configured": false, 00:29:37.346 "data_offset": 0, 00:29:37.346 "data_size": 63488 00:29:37.346 }, 00:29:37.346 { 00:29:37.346 "name": "BaseBdev2", 00:29:37.346 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:37.346 "is_configured": true, 00:29:37.346 "data_offset": 2048, 00:29:37.346 "data_size": 63488 00:29:37.346 } 00:29:37.346 ] 00:29:37.346 }' 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:37.346 "name": "raid_bdev1", 00:29:37.346 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:37.346 "strip_size_kb": 0, 00:29:37.346 "state": "online", 00:29:37.346 "raid_level": "raid1", 00:29:37.346 "superblock": true, 00:29:37.346 "num_base_bdevs": 2, 00:29:37.346 "num_base_bdevs_discovered": 1, 00:29:37.346 "num_base_bdevs_operational": 1, 00:29:37.346 "base_bdevs_list": [ 00:29:37.346 { 00:29:37.346 "name": null, 00:29:37.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.346 "is_configured": false, 00:29:37.346 "data_offset": 0, 00:29:37.346 "data_size": 63488 00:29:37.346 }, 00:29:37.346 { 00:29:37.346 "name": "BaseBdev2", 00:29:37.346 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:37.346 "is_configured": true, 
00:29:37.346 "data_offset": 2048, 00:29:37.346 "data_size": 63488 00:29:37.346 } 00:29:37.346 ] 00:29:37.346 }' 00:29:37.346 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.603 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.603 [2024-11-05 15:59:09.811523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:37.603 [2024-11-05 15:59:09.811653] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:37.603 [2024-11-05 15:59:09.811663] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:37.603 request: 00:29:37.603 { 00:29:37.603 "base_bdev": "BaseBdev1", 00:29:37.603 "raid_bdev": "raid_bdev1", 00:29:37.603 "method": "bdev_raid_add_base_bdev", 00:29:37.603 "req_id": 1 00:29:37.603 } 00:29:37.603 Got JSON-RPC error response 00:29:37.603 response: 00:29:37.603 { 00:29:37.603 "code": -22, 00:29:37.604 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:37.604 } 00:29:37.604 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:37.604 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:29:37.604 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:37.604 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:37.604 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:37.604 15:59:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:38.543 "name": "raid_bdev1", 00:29:38.543 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:38.543 "strip_size_kb": 0, 00:29:38.543 "state": "online", 00:29:38.543 "raid_level": "raid1", 00:29:38.543 "superblock": true, 00:29:38.543 "num_base_bdevs": 2, 00:29:38.543 "num_base_bdevs_discovered": 1, 00:29:38.543 "num_base_bdevs_operational": 1, 00:29:38.543 "base_bdevs_list": [ 00:29:38.543 { 00:29:38.543 "name": null, 00:29:38.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.543 "is_configured": false, 00:29:38.543 "data_offset": 0, 00:29:38.543 "data_size": 63488 00:29:38.543 }, 00:29:38.543 { 00:29:38.543 "name": "BaseBdev2", 00:29:38.543 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:38.543 "is_configured": true, 00:29:38.543 "data_offset": 2048, 00:29:38.543 "data_size": 63488 00:29:38.543 } 00:29:38.543 ] 00:29:38.543 }' 
00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:38.543 15:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:38.801 "name": "raid_bdev1", 00:29:38.801 "uuid": "6a5b75f6-a1b5-42f7-83dd-6859a58465af", 00:29:38.801 "strip_size_kb": 0, 00:29:38.801 "state": "online", 00:29:38.801 "raid_level": "raid1", 00:29:38.801 "superblock": true, 00:29:38.801 "num_base_bdevs": 2, 00:29:38.801 "num_base_bdevs_discovered": 1, 00:29:38.801 "num_base_bdevs_operational": 1, 00:29:38.801 "base_bdevs_list": [ 00:29:38.801 { 00:29:38.801 "name": null, 00:29:38.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.801 "is_configured": false, 00:29:38.801 "data_offset": 0, 
00:29:38.801 "data_size": 63488 00:29:38.801 }, 00:29:38.801 { 00:29:38.801 "name": "BaseBdev2", 00:29:38.801 "uuid": "b5e9962c-be48-5adc-bac3-06520eb8d5ae", 00:29:38.801 "is_configured": true, 00:29:38.801 "data_offset": 2048, 00:29:38.801 "data_size": 63488 00:29:38.801 } 00:29:38.801 ] 00:29:38.801 }' 00:29:38.801 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 74497 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 74497 ']' 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 74497 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74497 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:39.060 killing process with pid 74497 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74497' 00:29:39.060 Received shutdown signal, test time was about 15.877319 seconds 00:29:39.060 00:29:39.060 Latency(us) 00:29:39.060 [2024-11-05T15:59:11.475Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.060 [2024-11-05T15:59:11.475Z] =================================================================================================================== 00:29:39.060 [2024-11-05T15:59:11.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 74497 00:29:39.060 [2024-11-05 15:59:11.276536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:39.060 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 74497 00:29:39.060 [2024-11-05 15:59:11.276632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:39.060 [2024-11-05 15:59:11.276685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:39.060 [2024-11-05 15:59:11.276699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:39.060 [2024-11-05 15:59:11.389393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:39.627 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:39.627 00:29:39.627 real 0m18.101s 00:29:39.627 user 0m22.786s 00:29:39.627 sys 0m1.551s 00:29:39.627 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:39.627 ************************************ 00:29:39.627 END TEST raid_rebuild_test_sb_io 00:29:39.627 ************************************ 00:29:39.627 15:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:39.627 15:59:12 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:29:39.627 15:59:12 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:39.627 15:59:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:29:39.627 
15:59:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:39.627 15:59:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:39.627 ************************************ 00:29:39.627 START TEST raid_rebuild_test 00:29:39.627 ************************************ 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75178 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75178 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75178 ']' 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.627 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:39.886 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:39.886 Zero copy mechanism will not be used. 00:29:39.886 [2024-11-05 15:59:12.086099] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:29:39.886 [2024-11-05 15:59:12.086220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:29:39.886 [2024-11-05 15:59:12.240420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.144 [2024-11-05 15:59:12.322668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.144 [2024-11-05 15:59:12.431955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:40.144 [2024-11-05 15:59:12.431980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.710 BaseBdev1_malloc 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.710 [2024-11-05 15:59:12.911959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:40.710 [2024-11-05 15:59:12.912018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.710 [2024-11-05 15:59:12.912036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:40.710 [2024-11-05 15:59:12.912045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.710 [2024-11-05 15:59:12.913741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.710 [2024-11-05 15:59:12.913773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:40.710 BaseBdev1 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:29:40.710 BaseBdev2_malloc 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.710 [2024-11-05 15:59:12.943124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:40.710 [2024-11-05 15:59:12.943170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.710 [2024-11-05 15:59:12.943184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:40.710 [2024-11-05 15:59:12.943192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.710 [2024-11-05 15:59:12.944878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.710 [2024-11-05 15:59:12.944907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:40.710 BaseBdev2 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.710 BaseBdev3_malloc 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.710 [2024-11-05 15:59:12.987153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:40.710 [2024-11-05 15:59:12.987197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.710 [2024-11-05 15:59:12.987213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:40.710 [2024-11-05 15:59:12.987222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.710 [2024-11-05 15:59:12.988898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.710 [2024-11-05 15:59:12.988929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:40.710 BaseBdev3 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.710 15:59:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.710 BaseBdev4_malloc 00:29:40.710 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:40.711 [2024-11-05 15:59:13.018064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:40.711 [2024-11-05 15:59:13.018101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.711 [2024-11-05 15:59:13.018114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:40.711 [2024-11-05 15:59:13.018122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.711 [2024-11-05 15:59:13.019752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.711 [2024-11-05 15:59:13.019785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:40.711 BaseBdev4 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.711 spare_malloc 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.711 spare_delay 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:40.711 
15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.711 [2024-11-05 15:59:13.056868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:40.711 [2024-11-05 15:59:13.056908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.711 [2024-11-05 15:59:13.056920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:40.711 [2024-11-05 15:59:13.056929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.711 [2024-11-05 15:59:13.058625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.711 [2024-11-05 15:59:13.058655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:40.711 spare 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.711 [2024-11-05 15:59:13.064913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:40.711 [2024-11-05 15:59:13.066377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:40.711 [2024-11-05 15:59:13.066433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:40.711 [2024-11-05 15:59:13.066472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:40.711 [2024-11-05 15:59:13.066535] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:29:40.711 [2024-11-05 15:59:13.066556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:40.711 [2024-11-05 15:59:13.066755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:40.711 [2024-11-05 15:59:13.066892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:40.711 [2024-11-05 15:59:13.066906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:40.711 [2024-11-05 15:59:13.067020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.711 15:59:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:40.711 "name": "raid_bdev1", 00:29:40.711 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:40.711 "strip_size_kb": 0, 00:29:40.711 "state": "online", 00:29:40.711 "raid_level": "raid1", 00:29:40.711 "superblock": false, 00:29:40.711 "num_base_bdevs": 4, 00:29:40.711 "num_base_bdevs_discovered": 4, 00:29:40.711 "num_base_bdevs_operational": 4, 00:29:40.711 "base_bdevs_list": [ 00:29:40.711 { 00:29:40.711 "name": "BaseBdev1", 00:29:40.711 "uuid": "a4392f45-57bc-5d0a-bcfd-6b83b01cdeb0", 00:29:40.711 "is_configured": true, 00:29:40.711 "data_offset": 0, 00:29:40.711 "data_size": 65536 00:29:40.711 }, 00:29:40.711 { 00:29:40.711 "name": "BaseBdev2", 00:29:40.711 "uuid": "af322c48-8711-5e9a-9f63-a04bf57fc54a", 00:29:40.711 "is_configured": true, 00:29:40.711 "data_offset": 0, 00:29:40.711 "data_size": 65536 00:29:40.711 }, 00:29:40.711 { 00:29:40.711 "name": "BaseBdev3", 00:29:40.711 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:40.711 "is_configured": true, 00:29:40.711 "data_offset": 0, 00:29:40.711 "data_size": 65536 00:29:40.711 }, 00:29:40.711 { 00:29:40.711 "name": "BaseBdev4", 00:29:40.711 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:40.711 "is_configured": true, 00:29:40.711 "data_offset": 0, 00:29:40.711 "data_size": 65536 00:29:40.711 } 00:29:40.711 ] 00:29:40.711 }' 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:40.711 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:41.277 [2024-11-05 15:59:13.393248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:41.277 [2024-11-05 15:59:13.637051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:41.277 /dev/nbd0 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:41.277 15:59:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:41.277 1+0 records in 00:29:41.277 1+0 records out 00:29:41.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000140521 s, 29.1 MB/s 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:41.277 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:29:41.278 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:29:41.278 15:59:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:46.568 65536+0 records in 00:29:46.568 65536+0 records out 00:29:46.568 33554432 bytes (34 MB, 32 MiB) copied, 5.12481 s, 6.5 MB/s 00:29:46.568 15:59:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:46.568 15:59:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:46.568 15:59:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:46.568 15:59:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:46.568 
15:59:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:46.568 15:59:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:46.568 15:59:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:46.826 [2024-11-05 15:59:19.001272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.826 [2024-11-05 15:59:19.030036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.826 "name": "raid_bdev1", 00:29:46.826 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:46.826 "strip_size_kb": 0, 00:29:46.826 "state": "online", 00:29:46.826 "raid_level": "raid1", 00:29:46.826 "superblock": false, 00:29:46.826 "num_base_bdevs": 4, 00:29:46.826 "num_base_bdevs_discovered": 3, 00:29:46.826 "num_base_bdevs_operational": 3, 00:29:46.826 "base_bdevs_list": [ 00:29:46.826 { 00:29:46.826 "name": null, 00:29:46.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.826 
"is_configured": false, 00:29:46.826 "data_offset": 0, 00:29:46.826 "data_size": 65536 00:29:46.826 }, 00:29:46.826 { 00:29:46.826 "name": "BaseBdev2", 00:29:46.826 "uuid": "af322c48-8711-5e9a-9f63-a04bf57fc54a", 00:29:46.826 "is_configured": true, 00:29:46.826 "data_offset": 0, 00:29:46.826 "data_size": 65536 00:29:46.826 }, 00:29:46.826 { 00:29:46.826 "name": "BaseBdev3", 00:29:46.826 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:46.826 "is_configured": true, 00:29:46.826 "data_offset": 0, 00:29:46.826 "data_size": 65536 00:29:46.826 }, 00:29:46.826 { 00:29:46.826 "name": "BaseBdev4", 00:29:46.826 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:46.826 "is_configured": true, 00:29:46.826 "data_offset": 0, 00:29:46.826 "data_size": 65536 00:29:46.826 } 00:29:46.826 ] 00:29:46.826 }' 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:46.826 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.085 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:47.085 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.085 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.085 [2024-11-05 15:59:19.338110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:47.085 [2024-11-05 15:59:19.346278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:29:47.085 15:59:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.085 15:59:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:47.085 [2024-11-05 15:59:19.347838] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:48.019 "name": "raid_bdev1", 00:29:48.019 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:48.019 "strip_size_kb": 0, 00:29:48.019 "state": "online", 00:29:48.019 "raid_level": "raid1", 00:29:48.019 "superblock": false, 00:29:48.019 "num_base_bdevs": 4, 00:29:48.019 "num_base_bdevs_discovered": 4, 00:29:48.019 "num_base_bdevs_operational": 4, 00:29:48.019 "process": { 00:29:48.019 "type": "rebuild", 00:29:48.019 "target": "spare", 00:29:48.019 "progress": { 00:29:48.019 "blocks": 20480, 00:29:48.019 "percent": 31 00:29:48.019 } 00:29:48.019 }, 00:29:48.019 "base_bdevs_list": [ 00:29:48.019 { 00:29:48.019 "name": "spare", 00:29:48.019 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:48.019 "is_configured": true, 00:29:48.019 "data_offset": 0, 00:29:48.019 "data_size": 65536 00:29:48.019 }, 00:29:48.019 { 00:29:48.019 "name": "BaseBdev2", 00:29:48.019 "uuid": 
"af322c48-8711-5e9a-9f63-a04bf57fc54a", 00:29:48.019 "is_configured": true, 00:29:48.019 "data_offset": 0, 00:29:48.019 "data_size": 65536 00:29:48.019 }, 00:29:48.019 { 00:29:48.019 "name": "BaseBdev3", 00:29:48.019 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:48.019 "is_configured": true, 00:29:48.019 "data_offset": 0, 00:29:48.019 "data_size": 65536 00:29:48.019 }, 00:29:48.019 { 00:29:48.019 "name": "BaseBdev4", 00:29:48.019 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:48.019 "is_configured": true, 00:29:48.019 "data_offset": 0, 00:29:48.019 "data_size": 65536 00:29:48.019 } 00:29:48.019 ] 00:29:48.019 }' 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.019 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.277 [2024-11-05 15:59:20.458100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:48.277 [2024-11-05 15:59:20.553333] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:48.277 [2024-11-05 15:59:20.553400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:48.277 [2024-11-05 15:59:20.553414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:48.277 [2024-11-05 15:59:20.553422] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:48.277 "name": "raid_bdev1", 00:29:48.277 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:48.277 "strip_size_kb": 0, 00:29:48.277 "state": "online", 
00:29:48.277 "raid_level": "raid1", 00:29:48.277 "superblock": false, 00:29:48.277 "num_base_bdevs": 4, 00:29:48.277 "num_base_bdevs_discovered": 3, 00:29:48.277 "num_base_bdevs_operational": 3, 00:29:48.277 "base_bdevs_list": [ 00:29:48.277 { 00:29:48.277 "name": null, 00:29:48.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.277 "is_configured": false, 00:29:48.277 "data_offset": 0, 00:29:48.277 "data_size": 65536 00:29:48.277 }, 00:29:48.277 { 00:29:48.277 "name": "BaseBdev2", 00:29:48.277 "uuid": "af322c48-8711-5e9a-9f63-a04bf57fc54a", 00:29:48.277 "is_configured": true, 00:29:48.277 "data_offset": 0, 00:29:48.277 "data_size": 65536 00:29:48.277 }, 00:29:48.277 { 00:29:48.277 "name": "BaseBdev3", 00:29:48.277 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:48.277 "is_configured": true, 00:29:48.277 "data_offset": 0, 00:29:48.277 "data_size": 65536 00:29:48.277 }, 00:29:48.277 { 00:29:48.277 "name": "BaseBdev4", 00:29:48.277 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:48.277 "is_configured": true, 00:29:48.277 "data_offset": 0, 00:29:48.277 "data_size": 65536 00:29:48.277 } 00:29:48.277 ] 00:29:48.277 }' 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:48.277 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:48.536 "name": "raid_bdev1", 00:29:48.536 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:48.536 "strip_size_kb": 0, 00:29:48.536 "state": "online", 00:29:48.536 "raid_level": "raid1", 00:29:48.536 "superblock": false, 00:29:48.536 "num_base_bdevs": 4, 00:29:48.536 "num_base_bdevs_discovered": 3, 00:29:48.536 "num_base_bdevs_operational": 3, 00:29:48.536 "base_bdevs_list": [ 00:29:48.536 { 00:29:48.536 "name": null, 00:29:48.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.536 "is_configured": false, 00:29:48.536 "data_offset": 0, 00:29:48.536 "data_size": 65536 00:29:48.536 }, 00:29:48.536 { 00:29:48.536 "name": "BaseBdev2", 00:29:48.536 "uuid": "af322c48-8711-5e9a-9f63-a04bf57fc54a", 00:29:48.536 "is_configured": true, 00:29:48.536 "data_offset": 0, 00:29:48.536 "data_size": 65536 00:29:48.536 }, 00:29:48.536 { 00:29:48.536 "name": "BaseBdev3", 00:29:48.536 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:48.536 "is_configured": true, 00:29:48.536 "data_offset": 0, 00:29:48.536 "data_size": 65536 00:29:48.536 }, 00:29:48.536 { 00:29:48.536 "name": "BaseBdev4", 00:29:48.536 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:48.536 "is_configured": true, 00:29:48.536 "data_offset": 0, 00:29:48.536 "data_size": 65536 00:29:48.536 } 00:29:48.536 ] 00:29:48.536 }' 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:48.536 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:48.795 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:48.795 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:48.795 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.795 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.795 [2024-11-05 15:59:20.973352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:48.795 [2024-11-05 15:59:20.980972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:29:48.795 15:59:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.795 15:59:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:48.795 [2024-11-05 15:59:20.982545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.728 15:59:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.728 15:59:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.728 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.728 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:49.728 "name": "raid_bdev1", 00:29:49.728 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:49.728 "strip_size_kb": 0, 00:29:49.728 "state": "online", 00:29:49.728 "raid_level": "raid1", 00:29:49.728 "superblock": false, 00:29:49.728 "num_base_bdevs": 4, 00:29:49.728 "num_base_bdevs_discovered": 4, 00:29:49.728 "num_base_bdevs_operational": 4, 00:29:49.728 "process": { 00:29:49.728 "type": "rebuild", 00:29:49.728 "target": "spare", 00:29:49.728 "progress": { 00:29:49.728 "blocks": 20480, 00:29:49.728 "percent": 31 00:29:49.728 } 00:29:49.728 }, 00:29:49.728 "base_bdevs_list": [ 00:29:49.728 { 00:29:49.728 "name": "spare", 00:29:49.728 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:49.728 "is_configured": true, 00:29:49.728 "data_offset": 0, 00:29:49.728 "data_size": 65536 00:29:49.728 }, 00:29:49.728 { 00:29:49.728 "name": "BaseBdev2", 00:29:49.728 "uuid": "af322c48-8711-5e9a-9f63-a04bf57fc54a", 00:29:49.728 "is_configured": true, 00:29:49.728 "data_offset": 0, 00:29:49.728 "data_size": 65536 00:29:49.728 }, 00:29:49.728 { 00:29:49.728 "name": "BaseBdev3", 00:29:49.728 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:49.728 "is_configured": true, 00:29:49.728 "data_offset": 0, 00:29:49.728 "data_size": 65536 00:29:49.728 }, 00:29:49.728 { 00:29:49.728 "name": "BaseBdev4", 00:29:49.728 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:49.728 "is_configured": true, 00:29:49.728 "data_offset": 0, 00:29:49.728 "data_size": 65536 00:29:49.728 } 00:29:49.728 ] 00:29:49.728 }' 00:29:49.728 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:29:49.728 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.729 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.729 [2024-11-05 15:59:22.088754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:49.987 [2024-11-05 15:59:22.188009] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:49.987 
15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:49.987 "name": "raid_bdev1", 00:29:49.987 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:49.987 "strip_size_kb": 0, 00:29:49.987 "state": "online", 00:29:49.987 "raid_level": "raid1", 00:29:49.987 "superblock": false, 00:29:49.987 "num_base_bdevs": 4, 00:29:49.987 "num_base_bdevs_discovered": 3, 00:29:49.987 "num_base_bdevs_operational": 3, 00:29:49.987 "process": { 00:29:49.987 "type": "rebuild", 00:29:49.987 "target": "spare", 00:29:49.987 "progress": { 00:29:49.987 "blocks": 24576, 00:29:49.987 "percent": 37 00:29:49.987 } 00:29:49.987 }, 00:29:49.987 "base_bdevs_list": [ 00:29:49.987 { 00:29:49.987 "name": "spare", 00:29:49.987 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:49.987 "is_configured": true, 00:29:49.987 "data_offset": 0, 00:29:49.987 "data_size": 65536 00:29:49.987 }, 00:29:49.987 { 00:29:49.987 "name": null, 00:29:49.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.987 "is_configured": false, 00:29:49.987 "data_offset": 0, 00:29:49.987 "data_size": 65536 00:29:49.987 }, 00:29:49.987 { 00:29:49.987 "name": "BaseBdev3", 00:29:49.987 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:49.987 "is_configured": true, 
00:29:49.987 "data_offset": 0, 00:29:49.987 "data_size": 65536 00:29:49.987 }, 00:29:49.987 { 00:29:49.987 "name": "BaseBdev4", 00:29:49.987 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:49.987 "is_configured": true, 00:29:49.987 "data_offset": 0, 00:29:49.987 "data_size": 65536 00:29:49.987 } 00:29:49.987 ] 00:29:49.987 }' 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=338 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.987 15:59:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.987 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:49.987 "name": "raid_bdev1", 00:29:49.987 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:49.987 "strip_size_kb": 0, 00:29:49.987 "state": "online", 00:29:49.987 "raid_level": "raid1", 00:29:49.987 "superblock": false, 00:29:49.987 "num_base_bdevs": 4, 00:29:49.987 "num_base_bdevs_discovered": 3, 00:29:49.987 "num_base_bdevs_operational": 3, 00:29:49.987 "process": { 00:29:49.987 "type": "rebuild", 00:29:49.987 "target": "spare", 00:29:49.987 "progress": { 00:29:49.987 "blocks": 24576, 00:29:49.987 "percent": 37 00:29:49.987 } 00:29:49.987 }, 00:29:49.988 "base_bdevs_list": [ 00:29:49.988 { 00:29:49.988 "name": "spare", 00:29:49.988 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:49.988 "is_configured": true, 00:29:49.988 "data_offset": 0, 00:29:49.988 "data_size": 65536 00:29:49.988 }, 00:29:49.988 { 00:29:49.988 "name": null, 00:29:49.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.988 "is_configured": false, 00:29:49.988 "data_offset": 0, 00:29:49.988 "data_size": 65536 00:29:49.988 }, 00:29:49.988 { 00:29:49.988 "name": "BaseBdev3", 00:29:49.988 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:49.988 "is_configured": true, 00:29:49.988 "data_offset": 0, 00:29:49.988 "data_size": 65536 00:29:49.988 }, 00:29:49.988 { 00:29:49.988 "name": "BaseBdev4", 00:29:49.988 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:49.988 "is_configured": true, 00:29:49.988 "data_offset": 0, 00:29:49.988 "data_size": 65536 00:29:49.988 } 00:29:49.988 ] 00:29:49.988 }' 00:29:49.988 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:49.988 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.988 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:29:49.988 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.988 15:59:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:51.361 "name": "raid_bdev1", 00:29:51.361 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:51.361 "strip_size_kb": 0, 00:29:51.361 "state": "online", 00:29:51.361 "raid_level": "raid1", 00:29:51.361 "superblock": false, 00:29:51.361 "num_base_bdevs": 4, 00:29:51.361 "num_base_bdevs_discovered": 3, 00:29:51.361 "num_base_bdevs_operational": 3, 00:29:51.361 "process": { 00:29:51.361 "type": "rebuild", 00:29:51.361 "target": "spare", 00:29:51.361 "progress": { 00:29:51.361 
"blocks": 47104, 00:29:51.361 "percent": 71 00:29:51.361 } 00:29:51.361 }, 00:29:51.361 "base_bdevs_list": [ 00:29:51.361 { 00:29:51.361 "name": "spare", 00:29:51.361 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:51.361 "is_configured": true, 00:29:51.361 "data_offset": 0, 00:29:51.361 "data_size": 65536 00:29:51.361 }, 00:29:51.361 { 00:29:51.361 "name": null, 00:29:51.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.361 "is_configured": false, 00:29:51.361 "data_offset": 0, 00:29:51.361 "data_size": 65536 00:29:51.361 }, 00:29:51.361 { 00:29:51.361 "name": "BaseBdev3", 00:29:51.361 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:51.361 "is_configured": true, 00:29:51.361 "data_offset": 0, 00:29:51.361 "data_size": 65536 00:29:51.361 }, 00:29:51.361 { 00:29:51.361 "name": "BaseBdev4", 00:29:51.361 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:51.361 "is_configured": true, 00:29:51.361 "data_offset": 0, 00:29:51.361 "data_size": 65536 00:29:51.361 } 00:29:51.361 ] 00:29:51.361 }' 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:51.361 15:59:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:51.926 [2024-11-05 15:59:24.197353] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:51.926 [2024-11-05 15:59:24.197422] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:51.926 [2024-11-05 15:59:24.197465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:52.185 "name": "raid_bdev1", 00:29:52.185 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:52.185 "strip_size_kb": 0, 00:29:52.185 "state": "online", 00:29:52.185 "raid_level": "raid1", 00:29:52.185 "superblock": false, 00:29:52.185 "num_base_bdevs": 4, 00:29:52.185 "num_base_bdevs_discovered": 3, 00:29:52.185 "num_base_bdevs_operational": 3, 00:29:52.185 "base_bdevs_list": [ 00:29:52.185 { 00:29:52.185 "name": "spare", 00:29:52.185 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:52.185 "is_configured": true, 00:29:52.185 "data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 }, 00:29:52.185 { 00:29:52.185 "name": null, 00:29:52.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.185 "is_configured": false, 00:29:52.185 
"data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 }, 00:29:52.185 { 00:29:52.185 "name": "BaseBdev3", 00:29:52.185 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:52.185 "is_configured": true, 00:29:52.185 "data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 }, 00:29:52.185 { 00:29:52.185 "name": "BaseBdev4", 00:29:52.185 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:52.185 "is_configured": true, 00:29:52.185 "data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 } 00:29:52.185 ] 00:29:52.185 }' 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.185 15:59:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.185 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:52.185 "name": "raid_bdev1", 00:29:52.185 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:52.185 "strip_size_kb": 0, 00:29:52.185 "state": "online", 00:29:52.185 "raid_level": "raid1", 00:29:52.185 "superblock": false, 00:29:52.185 "num_base_bdevs": 4, 00:29:52.185 "num_base_bdevs_discovered": 3, 00:29:52.185 "num_base_bdevs_operational": 3, 00:29:52.185 "base_bdevs_list": [ 00:29:52.185 { 00:29:52.185 "name": "spare", 00:29:52.185 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:52.185 "is_configured": true, 00:29:52.185 "data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 }, 00:29:52.185 { 00:29:52.185 "name": null, 00:29:52.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.185 "is_configured": false, 00:29:52.185 "data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 }, 00:29:52.185 { 00:29:52.185 "name": "BaseBdev3", 00:29:52.185 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:52.185 "is_configured": true, 00:29:52.185 "data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 }, 00:29:52.185 { 00:29:52.185 "name": "BaseBdev4", 00:29:52.185 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:52.185 "is_configured": true, 00:29:52.185 "data_offset": 0, 00:29:52.185 "data_size": 65536 00:29:52.185 } 00:29:52.185 ] 00:29:52.185 }' 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:52.444 "name": "raid_bdev1", 00:29:52.444 "uuid": "707fdef5-fcca-4a90-9402-8550714bf9ca", 00:29:52.444 "strip_size_kb": 0, 00:29:52.444 "state": "online", 00:29:52.444 "raid_level": "raid1", 00:29:52.444 "superblock": false, 00:29:52.444 "num_base_bdevs": 4, 00:29:52.444 
"num_base_bdevs_discovered": 3, 00:29:52.444 "num_base_bdevs_operational": 3, 00:29:52.444 "base_bdevs_list": [ 00:29:52.444 { 00:29:52.444 "name": "spare", 00:29:52.444 "uuid": "5fce3326-b6dc-52dc-992e-2b369997e061", 00:29:52.444 "is_configured": true, 00:29:52.444 "data_offset": 0, 00:29:52.444 "data_size": 65536 00:29:52.444 }, 00:29:52.444 { 00:29:52.444 "name": null, 00:29:52.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.444 "is_configured": false, 00:29:52.444 "data_offset": 0, 00:29:52.444 "data_size": 65536 00:29:52.444 }, 00:29:52.444 { 00:29:52.444 "name": "BaseBdev3", 00:29:52.444 "uuid": "0e1f750c-4e83-5aee-b1ed-b2871530acbc", 00:29:52.444 "is_configured": true, 00:29:52.444 "data_offset": 0, 00:29:52.444 "data_size": 65536 00:29:52.444 }, 00:29:52.444 { 00:29:52.444 "name": "BaseBdev4", 00:29:52.444 "uuid": "e2245b45-0b24-5a2d-a5f8-9fc2ccab026d", 00:29:52.444 "is_configured": true, 00:29:52.444 "data_offset": 0, 00:29:52.444 "data_size": 65536 00:29:52.444 } 00:29:52.444 ] 00:29:52.444 }' 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:52.444 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.702 15:59:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:52.702 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.702 15:59:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.702 [2024-11-05 15:59:24.997987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:52.702 [2024-11-05 15:59:24.998014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:52.702 [2024-11-05 15:59:24.998077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:52.702 [2024-11-05 15:59:24.998140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:29:52.702 [2024-11-05 15:59:24.998154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:52.702 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.702 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.702 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:52.703 15:59:25 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:52.703 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:52.960 /dev/nbd0 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.960 1+0 records in 00:29:52.960 1+0 records out 00:29:52.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313101 s, 13.1 MB/s 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:52.960 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:29:53.217 /dev/nbd1 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:53.218 1+0 records in 00:29:53.218 1+0 records out 00:29:53.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002664 s, 15.4 MB/s 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:53.218 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:53.476 15:59:25 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:53.476 15:59:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75178 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75178 ']' 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75178 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # 
uname 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75178 00:29:53.734 killing process with pid 75178 00:29:53.734 Received shutdown signal, test time was about 60.000000 seconds 00:29:53.734 00:29:53.734 Latency(us) 00:29:53.734 [2024-11-05T15:59:26.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.734 [2024-11-05T15:59:26.149Z] =================================================================================================================== 00:29:53.734 [2024-11-05T15:59:26.149Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75178' 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75178 00:29:53.734 [2024-11-05 15:59:26.137861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:53.734 15:59:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75178 00:29:54.300 [2024-11-05 15:59:26.441773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:29:54.866 00:29:54.866 real 0m15.133s 00:29:54.866 user 0m16.731s 00:29:54.866 sys 0m2.387s 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.866 ************************************ 00:29:54.866 END TEST raid_rebuild_test 
00:29:54.866 ************************************ 00:29:54.866 15:59:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:29:54.866 15:59:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:29:54.866 15:59:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:54.866 15:59:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:54.866 ************************************ 00:29:54.866 START TEST raid_rebuild_test_sb 00:29:54.866 ************************************ 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:54.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@597 -- # raid_pid=75607 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75607 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75607 ']' 00:29:54.866 15:59:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.867 15:59:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:54.867 15:59:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.867 15:59:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:54.867 15:59:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:54.867 15:59:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:54.867 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:54.867 Zero copy mechanism will not be used. 00:29:54.867 [2024-11-05 15:59:27.270854] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:29:54.867 [2024-11-05 15:59:27.270970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75607 ] 00:29:55.125 [2024-11-05 15:59:27.441258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.383 [2024-11-05 15:59:27.571218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.383 [2024-11-05 15:59:27.697702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:55.383 [2024-11-05 15:59:27.697907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 BaseBdev1_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 [2024-11-05 15:59:28.113853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:29:55.958 [2024-11-05 15:59:28.113906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.958 [2024-11-05 15:59:28.113921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:55.958 [2024-11-05 15:59:28.113930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.958 [2024-11-05 15:59:28.115701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.958 [2024-11-05 15:59:28.115736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:55.958 BaseBdev1 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 BaseBdev2_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 [2024-11-05 15:59:28.145628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:55.958 [2024-11-05 15:59:28.145777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.958 [2024-11-05 15:59:28.145797] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:55.958 [2024-11-05 15:59:28.145807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.958 [2024-11-05 15:59:28.147573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.958 [2024-11-05 15:59:28.147601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:55.958 BaseBdev2 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 BaseBdev3_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 [2024-11-05 15:59:28.190448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:55.958 [2024-11-05 15:59:28.190492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.958 [2024-11-05 15:59:28.190508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:55.958 [2024-11-05 15:59:28.190517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:29:55.958 [2024-11-05 15:59:28.192256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.958 [2024-11-05 15:59:28.192287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:55.958 BaseBdev3 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 BaseBdev4_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 [2024-11-05 15:59:28.222227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:55.958 [2024-11-05 15:59:28.222266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.958 [2024-11-05 15:59:28.222281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:55.958 [2024-11-05 15:59:28.222289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.958 [2024-11-05 15:59:28.224026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.958 [2024-11-05 15:59:28.224055] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:55.958 BaseBdev4 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 spare_malloc 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 spare_delay 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.958 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.958 [2024-11-05 15:59:28.261965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:55.958 [2024-11-05 15:59:28.262006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.958 [2024-11-05 15:59:28.262019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:55.958 [2024-11-05 15:59:28.262028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:29:55.959 [2024-11-05 15:59:28.263728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.959 [2024-11-05 15:59:28.263758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:55.959 spare 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.959 [2024-11-05 15:59:28.270013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:55.959 [2024-11-05 15:59:28.271505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:55.959 [2024-11-05 15:59:28.271558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:55.959 [2024-11-05 15:59:28.271597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:55.959 [2024-11-05 15:59:28.271739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:55.959 [2024-11-05 15:59:28.271751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:55.959 [2024-11-05 15:59:28.271954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:55.959 [2024-11-05 15:59:28.272077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:55.959 [2024-11-05 15:59:28.272084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:55.959 [2024-11-05 15:59:28.272187] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.959 "name": "raid_bdev1", 00:29:55.959 "uuid": 
"071f96d8-a2f3-4522-86f2-b86bec403375", 00:29:55.959 "strip_size_kb": 0, 00:29:55.959 "state": "online", 00:29:55.959 "raid_level": "raid1", 00:29:55.959 "superblock": true, 00:29:55.959 "num_base_bdevs": 4, 00:29:55.959 "num_base_bdevs_discovered": 4, 00:29:55.959 "num_base_bdevs_operational": 4, 00:29:55.959 "base_bdevs_list": [ 00:29:55.959 { 00:29:55.959 "name": "BaseBdev1", 00:29:55.959 "uuid": "cd91dbc2-b9ca-58e7-ab74-4909cc22a6ee", 00:29:55.959 "is_configured": true, 00:29:55.959 "data_offset": 2048, 00:29:55.959 "data_size": 63488 00:29:55.959 }, 00:29:55.959 { 00:29:55.959 "name": "BaseBdev2", 00:29:55.959 "uuid": "a5fe6f97-4c78-51f4-bc11-390dd810a57c", 00:29:55.959 "is_configured": true, 00:29:55.959 "data_offset": 2048, 00:29:55.959 "data_size": 63488 00:29:55.959 }, 00:29:55.959 { 00:29:55.959 "name": "BaseBdev3", 00:29:55.959 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:29:55.959 "is_configured": true, 00:29:55.959 "data_offset": 2048, 00:29:55.959 "data_size": 63488 00:29:55.959 }, 00:29:55.959 { 00:29:55.959 "name": "BaseBdev4", 00:29:55.959 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:29:55.959 "is_configured": true, 00:29:55.959 "data_offset": 2048, 00:29:55.959 "data_size": 63488 00:29:55.959 } 00:29:55.959 ] 00:29:55.959 }' 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.959 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:56.240 [2024-11-05 15:59:28.562364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:56.240 15:59:28 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:56.240 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:56.498 [2024-11-05 15:59:28.806157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:56.498 /dev/nbd0 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.498 1+0 records in 00:29:56.498 1+0 records out 00:29:56.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224506 s, 18.2 MB/s 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:29:56.498 15:59:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:01.752 63488+0 records in 00:30:01.752 63488+0 records out 00:30:01.752 32505856 bytes (33 MB, 31 MiB) copied, 5.19332 s, 6.3 MB/s 00:30:01.752 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:01.752 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:01.752 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:01.752 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.752 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:01.752 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.752 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:30:02.010 [2024-11-05 15:59:34.246347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.010 [2024-11-05 15:59:34.267268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.010 "name": "raid_bdev1", 00:30:02.010 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:02.010 "strip_size_kb": 0, 00:30:02.010 "state": "online", 00:30:02.010 "raid_level": "raid1", 00:30:02.010 "superblock": true, 00:30:02.010 "num_base_bdevs": 4, 00:30:02.010 "num_base_bdevs_discovered": 3, 00:30:02.010 "num_base_bdevs_operational": 3, 00:30:02.010 "base_bdevs_list": [ 00:30:02.010 { 00:30:02.010 "name": null, 00:30:02.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.010 "is_configured": false, 00:30:02.010 "data_offset": 0, 00:30:02.010 "data_size": 63488 00:30:02.010 }, 00:30:02.010 { 00:30:02.010 "name": "BaseBdev2", 00:30:02.010 "uuid": "a5fe6f97-4c78-51f4-bc11-390dd810a57c", 00:30:02.010 "is_configured": true, 00:30:02.010 
"data_offset": 2048, 00:30:02.010 "data_size": 63488 00:30:02.010 }, 00:30:02.010 { 00:30:02.010 "name": "BaseBdev3", 00:30:02.010 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:02.010 "is_configured": true, 00:30:02.010 "data_offset": 2048, 00:30:02.010 "data_size": 63488 00:30:02.010 }, 00:30:02.010 { 00:30:02.010 "name": "BaseBdev4", 00:30:02.010 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:02.010 "is_configured": true, 00:30:02.010 "data_offset": 2048, 00:30:02.010 "data_size": 63488 00:30:02.010 } 00:30:02.010 ] 00:30:02.010 }' 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.010 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.267 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:02.267 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.267 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.267 [2024-11-05 15:59:34.583338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.267 [2024-11-05 15:59:34.591525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:30:02.267 15:59:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.267 15:59:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:02.267 [2024-11-05 15:59:34.593180] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.200 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:03.458 "name": "raid_bdev1", 00:30:03.458 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:03.458 "strip_size_kb": 0, 00:30:03.458 "state": "online", 00:30:03.458 "raid_level": "raid1", 00:30:03.458 "superblock": true, 00:30:03.458 "num_base_bdevs": 4, 00:30:03.458 "num_base_bdevs_discovered": 4, 00:30:03.458 "num_base_bdevs_operational": 4, 00:30:03.458 "process": { 00:30:03.458 "type": "rebuild", 00:30:03.458 "target": "spare", 00:30:03.458 "progress": { 00:30:03.458 "blocks": 20480, 00:30:03.458 "percent": 32 00:30:03.458 } 00:30:03.458 }, 00:30:03.458 "base_bdevs_list": [ 00:30:03.458 { 00:30:03.458 "name": "spare", 00:30:03.458 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:03.458 "is_configured": true, 00:30:03.458 "data_offset": 2048, 00:30:03.458 "data_size": 63488 00:30:03.458 }, 00:30:03.458 { 00:30:03.458 "name": "BaseBdev2", 00:30:03.458 "uuid": "a5fe6f97-4c78-51f4-bc11-390dd810a57c", 00:30:03.458 "is_configured": true, 00:30:03.458 "data_offset": 2048, 00:30:03.458 "data_size": 63488 00:30:03.458 }, 00:30:03.458 { 00:30:03.458 "name": "BaseBdev3", 00:30:03.458 "uuid": 
"6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:03.458 "is_configured": true, 00:30:03.458 "data_offset": 2048, 00:30:03.458 "data_size": 63488 00:30:03.458 }, 00:30:03.458 { 00:30:03.458 "name": "BaseBdev4", 00:30:03.458 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:03.458 "is_configured": true, 00:30:03.458 "data_offset": 2048, 00:30:03.458 "data_size": 63488 00:30:03.458 } 00:30:03.458 ] 00:30:03.458 }' 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.458 [2024-11-05 15:59:35.695023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:03.458 [2024-11-05 15:59:35.698232] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:03.458 [2024-11-05 15:59:35.698282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:03.458 [2024-11-05 15:59:35.698296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:03.458 [2024-11-05 15:59:35.698303] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:03.458 "name": "raid_bdev1", 00:30:03.458 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:03.458 "strip_size_kb": 0, 00:30:03.458 "state": "online", 00:30:03.458 "raid_level": "raid1", 00:30:03.458 "superblock": true, 00:30:03.458 "num_base_bdevs": 4, 00:30:03.458 
"num_base_bdevs_discovered": 3, 00:30:03.458 "num_base_bdevs_operational": 3, 00:30:03.458 "base_bdevs_list": [ 00:30:03.458 { 00:30:03.458 "name": null, 00:30:03.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.458 "is_configured": false, 00:30:03.458 "data_offset": 0, 00:30:03.458 "data_size": 63488 00:30:03.458 }, 00:30:03.458 { 00:30:03.458 "name": "BaseBdev2", 00:30:03.458 "uuid": "a5fe6f97-4c78-51f4-bc11-390dd810a57c", 00:30:03.458 "is_configured": true, 00:30:03.458 "data_offset": 2048, 00:30:03.458 "data_size": 63488 00:30:03.458 }, 00:30:03.458 { 00:30:03.458 "name": "BaseBdev3", 00:30:03.458 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:03.458 "is_configured": true, 00:30:03.458 "data_offset": 2048, 00:30:03.458 "data_size": 63488 00:30:03.458 }, 00:30:03.458 { 00:30:03.458 "name": "BaseBdev4", 00:30:03.458 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:03.458 "is_configured": true, 00:30:03.458 "data_offset": 2048, 00:30:03.458 "data_size": 63488 00:30:03.458 } 00:30:03.458 ] 00:30:03.458 }' 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:03.458 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 15:59:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:03.716 "name": "raid_bdev1", 00:30:03.716 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:03.716 "strip_size_kb": 0, 00:30:03.716 "state": "online", 00:30:03.716 "raid_level": "raid1", 00:30:03.716 "superblock": true, 00:30:03.716 "num_base_bdevs": 4, 00:30:03.716 "num_base_bdevs_discovered": 3, 00:30:03.716 "num_base_bdevs_operational": 3, 00:30:03.716 "base_bdevs_list": [ 00:30:03.716 { 00:30:03.716 "name": null, 00:30:03.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.716 "is_configured": false, 00:30:03.716 "data_offset": 0, 00:30:03.716 "data_size": 63488 00:30:03.716 }, 00:30:03.716 { 00:30:03.716 "name": "BaseBdev2", 00:30:03.716 "uuid": "a5fe6f97-4c78-51f4-bc11-390dd810a57c", 00:30:03.716 "is_configured": true, 00:30:03.716 "data_offset": 2048, 00:30:03.716 "data_size": 63488 00:30:03.716 }, 00:30:03.716 { 00:30:03.716 "name": "BaseBdev3", 00:30:03.716 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:03.716 "is_configured": true, 00:30:03.716 "data_offset": 2048, 00:30:03.716 "data_size": 63488 00:30:03.716 }, 00:30:03.716 { 00:30:03.716 "name": "BaseBdev4", 00:30:03.716 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:03.716 "is_configured": true, 00:30:03.716 "data_offset": 2048, 00:30:03.716 "data_size": 63488 00:30:03.716 } 00:30:03.716 ] 00:30:03.716 }' 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 [2024-11-05 15:59:36.074780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:03.716 [2024-11-05 15:59:36.082572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.716 15:59:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:03.716 [2024-11-05 15:59:36.084169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:05.088 "name": "raid_bdev1", 00:30:05.088 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:05.088 "strip_size_kb": 0, 00:30:05.088 "state": "online", 00:30:05.088 "raid_level": "raid1", 00:30:05.088 "superblock": true, 00:30:05.088 "num_base_bdevs": 4, 00:30:05.088 "num_base_bdevs_discovered": 4, 00:30:05.088 "num_base_bdevs_operational": 4, 00:30:05.088 "process": { 00:30:05.088 "type": "rebuild", 00:30:05.088 "target": "spare", 00:30:05.088 "progress": { 00:30:05.088 "blocks": 20480, 00:30:05.088 "percent": 32 00:30:05.088 } 00:30:05.088 }, 00:30:05.088 "base_bdevs_list": [ 00:30:05.088 { 00:30:05.088 "name": "spare", 00:30:05.088 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:05.088 "is_configured": true, 00:30:05.088 "data_offset": 2048, 00:30:05.088 "data_size": 63488 00:30:05.088 }, 00:30:05.088 { 00:30:05.088 "name": "BaseBdev2", 00:30:05.088 "uuid": "a5fe6f97-4c78-51f4-bc11-390dd810a57c", 00:30:05.088 "is_configured": true, 00:30:05.088 "data_offset": 2048, 00:30:05.088 "data_size": 63488 00:30:05.088 }, 00:30:05.088 { 00:30:05.088 "name": "BaseBdev3", 00:30:05.088 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:05.088 "is_configured": true, 00:30:05.088 "data_offset": 2048, 00:30:05.088 "data_size": 63488 00:30:05.088 }, 00:30:05.088 { 00:30:05.088 "name": "BaseBdev4", 00:30:05.088 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:05.088 "is_configured": true, 00:30:05.088 "data_offset": 2048, 00:30:05.088 "data_size": 63488 00:30:05.088 } 00:30:05.088 ] 00:30:05.088 }' 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:05.088 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.088 [2024-11-05 15:59:37.185992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:05.088 [2024-11-05 15:59:37.289197] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.088 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:05.089 "name": "raid_bdev1", 00:30:05.089 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:05.089 "strip_size_kb": 0, 00:30:05.089 "state": "online", 00:30:05.089 "raid_level": "raid1", 00:30:05.089 "superblock": true, 00:30:05.089 "num_base_bdevs": 4, 00:30:05.089 "num_base_bdevs_discovered": 3, 00:30:05.089 "num_base_bdevs_operational": 3, 00:30:05.089 "process": { 00:30:05.089 "type": "rebuild", 00:30:05.089 "target": "spare", 00:30:05.089 "progress": { 00:30:05.089 "blocks": 22528, 00:30:05.089 "percent": 35 00:30:05.089 } 00:30:05.089 }, 00:30:05.089 "base_bdevs_list": [ 00:30:05.089 { 00:30:05.089 "name": "spare", 00:30:05.089 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:05.089 "is_configured": true, 00:30:05.089 "data_offset": 2048, 00:30:05.089 "data_size": 63488 00:30:05.089 }, 00:30:05.089 { 00:30:05.089 "name": null, 00:30:05.089 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:05.089 "is_configured": false, 00:30:05.089 "data_offset": 0, 00:30:05.089 "data_size": 63488 00:30:05.089 }, 00:30:05.089 { 00:30:05.089 "name": "BaseBdev3", 00:30:05.089 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:05.089 "is_configured": true, 00:30:05.089 "data_offset": 2048, 00:30:05.089 "data_size": 63488 00:30:05.089 }, 00:30:05.089 { 00:30:05.089 "name": "BaseBdev4", 00:30:05.089 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:05.089 "is_configured": true, 00:30:05.089 "data_offset": 2048, 00:30:05.089 "data_size": 63488 00:30:05.089 } 00:30:05.089 ] 00:30:05.089 }' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=353 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.089 
15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:05.089 "name": "raid_bdev1", 00:30:05.089 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:05.089 "strip_size_kb": 0, 00:30:05.089 "state": "online", 00:30:05.089 "raid_level": "raid1", 00:30:05.089 "superblock": true, 00:30:05.089 "num_base_bdevs": 4, 00:30:05.089 "num_base_bdevs_discovered": 3, 00:30:05.089 "num_base_bdevs_operational": 3, 00:30:05.089 "process": { 00:30:05.089 "type": "rebuild", 00:30:05.089 "target": "spare", 00:30:05.089 "progress": { 00:30:05.089 "blocks": 24576, 00:30:05.089 "percent": 38 00:30:05.089 } 00:30:05.089 }, 00:30:05.089 "base_bdevs_list": [ 00:30:05.089 { 00:30:05.089 "name": "spare", 00:30:05.089 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:05.089 "is_configured": true, 00:30:05.089 "data_offset": 2048, 00:30:05.089 "data_size": 63488 00:30:05.089 }, 00:30:05.089 { 00:30:05.089 "name": null, 00:30:05.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.089 "is_configured": false, 00:30:05.089 "data_offset": 0, 00:30:05.089 "data_size": 63488 00:30:05.089 }, 00:30:05.089 { 00:30:05.089 "name": "BaseBdev3", 00:30:05.089 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:05.089 "is_configured": true, 00:30:05.089 "data_offset": 2048, 00:30:05.089 "data_size": 63488 00:30:05.089 }, 00:30:05.089 { 00:30:05.089 "name": "BaseBdev4", 00:30:05.089 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:05.089 "is_configured": true, 00:30:05.089 "data_offset": 2048, 00:30:05.089 "data_size": 63488 
00:30:05.089 } 00:30:05.089 ] 00:30:05.089 }' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:05.089 15:59:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.491 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:06.491 "name": "raid_bdev1", 00:30:06.491 "uuid": 
"071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:06.491 "strip_size_kb": 0, 00:30:06.491 "state": "online", 00:30:06.491 "raid_level": "raid1", 00:30:06.491 "superblock": true, 00:30:06.491 "num_base_bdevs": 4, 00:30:06.491 "num_base_bdevs_discovered": 3, 00:30:06.491 "num_base_bdevs_operational": 3, 00:30:06.491 "process": { 00:30:06.491 "type": "rebuild", 00:30:06.491 "target": "spare", 00:30:06.491 "progress": { 00:30:06.491 "blocks": 45056, 00:30:06.491 "percent": 70 00:30:06.491 } 00:30:06.491 }, 00:30:06.491 "base_bdevs_list": [ 00:30:06.491 { 00:30:06.491 "name": "spare", 00:30:06.491 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:06.491 "is_configured": true, 00:30:06.491 "data_offset": 2048, 00:30:06.491 "data_size": 63488 00:30:06.491 }, 00:30:06.491 { 00:30:06.491 "name": null, 00:30:06.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.491 "is_configured": false, 00:30:06.491 "data_offset": 0, 00:30:06.491 "data_size": 63488 00:30:06.491 }, 00:30:06.491 { 00:30:06.491 "name": "BaseBdev3", 00:30:06.491 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:06.491 "is_configured": true, 00:30:06.491 "data_offset": 2048, 00:30:06.491 "data_size": 63488 00:30:06.491 }, 00:30:06.491 { 00:30:06.491 "name": "BaseBdev4", 00:30:06.492 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:06.492 "is_configured": true, 00:30:06.492 "data_offset": 2048, 00:30:06.492 "data_size": 63488 00:30:06.492 } 00:30:06.492 ] 00:30:06.492 }' 00:30:06.492 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:06.492 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:06.492 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:06.492 15:59:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.492 15:59:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:30:07.056 [2024-11-05 15:59:39.298273] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:07.056 [2024-11-05 15:59:39.298342] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:07.056 [2024-11-05 15:59:39.298448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:07.313 "name": "raid_bdev1", 00:30:07.313 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:07.313 "strip_size_kb": 0, 00:30:07.313 "state": "online", 00:30:07.313 "raid_level": "raid1", 00:30:07.313 "superblock": true, 00:30:07.313 "num_base_bdevs": 
4, 00:30:07.313 "num_base_bdevs_discovered": 3, 00:30:07.313 "num_base_bdevs_operational": 3, 00:30:07.313 "base_bdevs_list": [ 00:30:07.313 { 00:30:07.313 "name": "spare", 00:30:07.313 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:07.313 "is_configured": true, 00:30:07.313 "data_offset": 2048, 00:30:07.313 "data_size": 63488 00:30:07.313 }, 00:30:07.313 { 00:30:07.313 "name": null, 00:30:07.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.313 "is_configured": false, 00:30:07.313 "data_offset": 0, 00:30:07.313 "data_size": 63488 00:30:07.313 }, 00:30:07.313 { 00:30:07.313 "name": "BaseBdev3", 00:30:07.313 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:07.313 "is_configured": true, 00:30:07.313 "data_offset": 2048, 00:30:07.313 "data_size": 63488 00:30:07.313 }, 00:30:07.313 { 00:30:07.313 "name": "BaseBdev4", 00:30:07.313 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:07.313 "is_configured": true, 00:30:07.313 "data_offset": 2048, 00:30:07.313 "data_size": 63488 00:30:07.313 } 00:30:07.313 ] 00:30:07.313 }' 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:07.313 15:59:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:07.313 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:07.314 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.314 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.314 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.314 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.314 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.314 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:07.314 "name": "raid_bdev1", 00:30:07.314 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:07.314 "strip_size_kb": 0, 00:30:07.314 "state": "online", 00:30:07.314 "raid_level": "raid1", 00:30:07.314 "superblock": true, 00:30:07.314 "num_base_bdevs": 4, 00:30:07.314 "num_base_bdevs_discovered": 3, 00:30:07.314 "num_base_bdevs_operational": 3, 00:30:07.314 "base_bdevs_list": [ 00:30:07.314 { 00:30:07.314 "name": "spare", 00:30:07.314 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:07.314 "is_configured": true, 00:30:07.314 "data_offset": 2048, 00:30:07.314 "data_size": 63488 00:30:07.314 }, 00:30:07.314 { 00:30:07.314 "name": null, 00:30:07.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.314 "is_configured": false, 00:30:07.314 "data_offset": 0, 00:30:07.314 "data_size": 63488 00:30:07.314 }, 00:30:07.314 { 00:30:07.314 "name": "BaseBdev3", 00:30:07.314 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:07.314 "is_configured": true, 00:30:07.314 "data_offset": 2048, 00:30:07.314 "data_size": 63488 00:30:07.314 }, 00:30:07.314 { 00:30:07.314 "name": "BaseBdev4", 00:30:07.314 "uuid": 
"55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:07.314 "is_configured": true, 00:30:07.314 "data_offset": 2048, 00:30:07.314 "data_size": 63488 00:30:07.314 } 00:30:07.314 ] 00:30:07.314 }' 00:30:07.314 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.571 15:59:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.571 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:07.571 "name": "raid_bdev1", 00:30:07.571 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:07.571 "strip_size_kb": 0, 00:30:07.571 "state": "online", 00:30:07.571 "raid_level": "raid1", 00:30:07.571 "superblock": true, 00:30:07.571 "num_base_bdevs": 4, 00:30:07.571 "num_base_bdevs_discovered": 3, 00:30:07.571 "num_base_bdevs_operational": 3, 00:30:07.571 "base_bdevs_list": [ 00:30:07.571 { 00:30:07.571 "name": "spare", 00:30:07.571 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:07.571 "is_configured": true, 00:30:07.571 "data_offset": 2048, 00:30:07.571 "data_size": 63488 00:30:07.571 }, 00:30:07.571 { 00:30:07.571 "name": null, 00:30:07.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.571 "is_configured": false, 00:30:07.571 "data_offset": 0, 00:30:07.571 "data_size": 63488 00:30:07.571 }, 00:30:07.571 { 00:30:07.572 "name": "BaseBdev3", 00:30:07.572 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:07.572 "is_configured": true, 00:30:07.572 "data_offset": 2048, 00:30:07.572 "data_size": 63488 00:30:07.572 }, 00:30:07.572 { 00:30:07.572 "name": "BaseBdev4", 00:30:07.572 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:07.572 "is_configured": true, 00:30:07.572 "data_offset": 2048, 00:30:07.572 "data_size": 63488 00:30:07.572 } 00:30:07.572 ] 00:30:07.572 }' 00:30:07.572 15:59:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:07.572 15:59:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.830 [2024-11-05 15:59:40.074808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:07.830 [2024-11-05 15:59:40.074834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:07.830 [2024-11-05 15:59:40.074912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:07.830 [2024-11-05 15:59:40.074983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:07.830 [2024-11-05 15:59:40.074991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:07.830 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:08.087 /dev/nbd0 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:08.087 
15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:08.087 1+0 records in 00:30:08.087 1+0 records out 00:30:08.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300869 s, 13.6 MB/s 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:08.087 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:08.344 /dev/nbd1 00:30:08.344 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:08.344 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:08.344 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:30:08.344 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:30:08.344 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:30:08.344 15:59:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:08.344 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:08.345 1+0 records in 00:30:08.345 1+0 records out 00:30:08.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301638 s, 13.6 MB/s 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:08.345 15:59:40 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:08.345 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:08.603 15:59:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:08.870 15:59:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.870 [2024-11-05 15:59:41.044253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:08.870 [2024-11-05 15:59:41.044389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:08.870 [2024-11-05 15:59:41.044414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:08.870 [2024-11-05 15:59:41.044422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:08.870 [2024-11-05 15:59:41.046244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:08.870 [2024-11-05 15:59:41.046275] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:08.870 [2024-11-05 15:59:41.046346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:08.870 [2024-11-05 15:59:41.046388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:08.870 [2024-11-05 15:59:41.046492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:08.870 [2024-11-05 15:59:41.046586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:08.870 spare 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.870 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.870 [2024-11-05 15:59:41.146656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:08.870 [2024-11-05 15:59:41.146676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:08.871 [2024-11-05 15:59:41.146943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:30:08.871 [2024-11-05 15:59:41.147087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:08.871 [2024-11-05 15:59:41.147096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:08.871 [2024-11-05 15:59:41.147221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.871 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:08.871 "name": "raid_bdev1", 00:30:08.871 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:08.871 "strip_size_kb": 0, 00:30:08.871 "state": "online", 00:30:08.871 "raid_level": "raid1", 00:30:08.871 "superblock": true, 00:30:08.871 "num_base_bdevs": 4, 00:30:08.871 "num_base_bdevs_discovered": 3, 00:30:08.871 "num_base_bdevs_operational": 
3, 00:30:08.871 "base_bdevs_list": [ 00:30:08.871 { 00:30:08.871 "name": "spare", 00:30:08.871 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:08.871 "is_configured": true, 00:30:08.871 "data_offset": 2048, 00:30:08.871 "data_size": 63488 00:30:08.871 }, 00:30:08.871 { 00:30:08.872 "name": null, 00:30:08.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.872 "is_configured": false, 00:30:08.872 "data_offset": 2048, 00:30:08.872 "data_size": 63488 00:30:08.872 }, 00:30:08.872 { 00:30:08.872 "name": "BaseBdev3", 00:30:08.872 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:08.872 "is_configured": true, 00:30:08.872 "data_offset": 2048, 00:30:08.872 "data_size": 63488 00:30:08.872 }, 00:30:08.872 { 00:30:08.872 "name": "BaseBdev4", 00:30:08.872 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:08.872 "is_configured": true, 00:30:08.872 "data_offset": 2048, 00:30:08.872 "data_size": 63488 00:30:08.872 } 00:30:08.872 ] 00:30:08.872 }' 00:30:08.872 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:08.872 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:09.140 "name": "raid_bdev1", 00:30:09.140 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:09.140 "strip_size_kb": 0, 00:30:09.140 "state": "online", 00:30:09.140 "raid_level": "raid1", 00:30:09.140 "superblock": true, 00:30:09.140 "num_base_bdevs": 4, 00:30:09.140 "num_base_bdevs_discovered": 3, 00:30:09.140 "num_base_bdevs_operational": 3, 00:30:09.140 "base_bdevs_list": [ 00:30:09.140 { 00:30:09.140 "name": "spare", 00:30:09.140 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:09.140 "is_configured": true, 00:30:09.140 "data_offset": 2048, 00:30:09.140 "data_size": 63488 00:30:09.140 }, 00:30:09.140 { 00:30:09.140 "name": null, 00:30:09.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.140 "is_configured": false, 00:30:09.140 "data_offset": 2048, 00:30:09.140 "data_size": 63488 00:30:09.140 }, 00:30:09.140 { 00:30:09.140 "name": "BaseBdev3", 00:30:09.140 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:09.140 "is_configured": true, 00:30:09.140 "data_offset": 2048, 00:30:09.140 "data_size": 63488 00:30:09.140 }, 00:30:09.140 { 00:30:09.140 "name": "BaseBdev4", 00:30:09.140 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:09.140 "is_configured": true, 00:30:09.140 "data_offset": 2048, 00:30:09.140 "data_size": 63488 00:30:09.140 } 00:30:09.140 ] 00:30:09.140 }' 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:09.140 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:30:09.141 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:09.141 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:09.141 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.141 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.141 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.141 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.398 [2024-11-05 15:59:41.576401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.398 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:09.398 "name": "raid_bdev1", 00:30:09.398 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:09.398 "strip_size_kb": 0, 00:30:09.398 "state": "online", 00:30:09.398 "raid_level": "raid1", 00:30:09.398 "superblock": true, 00:30:09.398 "num_base_bdevs": 4, 00:30:09.398 "num_base_bdevs_discovered": 2, 00:30:09.398 "num_base_bdevs_operational": 2, 00:30:09.398 "base_bdevs_list": [ 00:30:09.398 { 00:30:09.398 "name": null, 00:30:09.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.398 "is_configured": false, 00:30:09.398 "data_offset": 0, 00:30:09.398 "data_size": 63488 00:30:09.398 }, 00:30:09.398 { 00:30:09.398 "name": null, 00:30:09.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.398 "is_configured": false, 00:30:09.398 "data_offset": 2048, 00:30:09.398 "data_size": 63488 00:30:09.398 }, 00:30:09.398 { 00:30:09.398 "name": "BaseBdev3", 00:30:09.398 "uuid": 
"6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:09.398 "is_configured": true, 00:30:09.398 "data_offset": 2048, 00:30:09.398 "data_size": 63488 00:30:09.398 }, 00:30:09.398 { 00:30:09.398 "name": "BaseBdev4", 00:30:09.398 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:09.399 "is_configured": true, 00:30:09.399 "data_offset": 2048, 00:30:09.399 "data_size": 63488 00:30:09.399 } 00:30:09.399 ] 00:30:09.399 }' 00:30:09.399 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:09.399 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.656 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:09.656 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.656 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.656 [2024-11-05 15:59:41.900490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:09.656 [2024-11-05 15:59:41.900636] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:09.656 [2024-11-05 15:59:41.900648] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:09.656 [2024-11-05 15:59:41.900685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:09.656 [2024-11-05 15:59:41.908158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:30:09.656 15:59:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.656 15:59:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:09.656 [2024-11-05 15:59:41.909715] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:10.590 "name": "raid_bdev1", 00:30:10.590 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:10.590 "strip_size_kb": 0, 00:30:10.590 "state": "online", 00:30:10.590 "raid_level": "raid1", 
00:30:10.590 "superblock": true, 00:30:10.590 "num_base_bdevs": 4, 00:30:10.590 "num_base_bdevs_discovered": 3, 00:30:10.590 "num_base_bdevs_operational": 3, 00:30:10.590 "process": { 00:30:10.590 "type": "rebuild", 00:30:10.590 "target": "spare", 00:30:10.590 "progress": { 00:30:10.590 "blocks": 20480, 00:30:10.590 "percent": 32 00:30:10.590 } 00:30:10.590 }, 00:30:10.590 "base_bdevs_list": [ 00:30:10.590 { 00:30:10.590 "name": "spare", 00:30:10.590 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:10.590 "is_configured": true, 00:30:10.590 "data_offset": 2048, 00:30:10.590 "data_size": 63488 00:30:10.590 }, 00:30:10.590 { 00:30:10.590 "name": null, 00:30:10.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.590 "is_configured": false, 00:30:10.590 "data_offset": 2048, 00:30:10.590 "data_size": 63488 00:30:10.590 }, 00:30:10.590 { 00:30:10.590 "name": "BaseBdev3", 00:30:10.590 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:10.590 "is_configured": true, 00:30:10.590 "data_offset": 2048, 00:30:10.590 "data_size": 63488 00:30:10.590 }, 00:30:10.590 { 00:30:10.590 "name": "BaseBdev4", 00:30:10.590 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:10.590 "is_configured": true, 00:30:10.590 "data_offset": 2048, 00:30:10.590 "data_size": 63488 00:30:10.590 } 00:30:10.590 ] 00:30:10.590 }' 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:10.590 15:59:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.848 [2024-11-05 15:59:43.015933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:10.848 [2024-11-05 15:59:43.115144] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:10.848 [2024-11-05 15:59:43.115197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:10.848 [2024-11-05 15:59:43.115213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:10.848 [2024-11-05 15:59:43.115219] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:10.848 "name": "raid_bdev1", 00:30:10.848 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:10.848 "strip_size_kb": 0, 00:30:10.848 "state": "online", 00:30:10.848 "raid_level": "raid1", 00:30:10.848 "superblock": true, 00:30:10.848 "num_base_bdevs": 4, 00:30:10.848 "num_base_bdevs_discovered": 2, 00:30:10.848 "num_base_bdevs_operational": 2, 00:30:10.848 "base_bdevs_list": [ 00:30:10.848 { 00:30:10.848 "name": null, 00:30:10.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.848 "is_configured": false, 00:30:10.848 "data_offset": 0, 00:30:10.848 "data_size": 63488 00:30:10.848 }, 00:30:10.848 { 00:30:10.848 "name": null, 00:30:10.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.848 "is_configured": false, 00:30:10.848 "data_offset": 2048, 00:30:10.848 "data_size": 63488 00:30:10.848 }, 00:30:10.848 { 00:30:10.848 "name": "BaseBdev3", 00:30:10.848 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:10.848 "is_configured": true, 00:30:10.848 "data_offset": 2048, 00:30:10.848 "data_size": 63488 00:30:10.848 }, 00:30:10.848 { 00:30:10.848 "name": "BaseBdev4", 00:30:10.848 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:10.848 "is_configured": true, 00:30:10.848 "data_offset": 2048, 00:30:10.848 "data_size": 63488 00:30:10.848 } 00:30:10.848 ] 00:30:10.848 }' 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:30:10.848 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:11.107 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.107 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 [2024-11-05 15:59:43.475442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:11.107 [2024-11-05 15:59:43.475493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:11.107 [2024-11-05 15:59:43.475510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:30:11.107 [2024-11-05 15:59:43.475517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:11.107 [2024-11-05 15:59:43.475894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:11.107 [2024-11-05 15:59:43.475910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:11.107 [2024-11-05 15:59:43.475982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:11.107 [2024-11-05 15:59:43.475992] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:11.107 [2024-11-05 15:59:43.476003] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:11.107 [2024-11-05 15:59:43.476023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:11.107 [2024-11-05 15:59:43.483633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:30:11.107 spare 00:30:11.107 15:59:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.107 15:59:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:11.107 [2024-11-05 15:59:43.485173] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:12.481 "name": "raid_bdev1", 00:30:12.481 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:12.481 "strip_size_kb": 0, 00:30:12.481 "state": "online", 00:30:12.481 
"raid_level": "raid1", 00:30:12.481 "superblock": true, 00:30:12.481 "num_base_bdevs": 4, 00:30:12.481 "num_base_bdevs_discovered": 3, 00:30:12.481 "num_base_bdevs_operational": 3, 00:30:12.481 "process": { 00:30:12.481 "type": "rebuild", 00:30:12.481 "target": "spare", 00:30:12.481 "progress": { 00:30:12.481 "blocks": 20480, 00:30:12.481 "percent": 32 00:30:12.481 } 00:30:12.481 }, 00:30:12.481 "base_bdevs_list": [ 00:30:12.481 { 00:30:12.481 "name": "spare", 00:30:12.481 "uuid": "cee1f1ff-d835-5a99-99be-fc841ed6544f", 00:30:12.481 "is_configured": true, 00:30:12.481 "data_offset": 2048, 00:30:12.481 "data_size": 63488 00:30:12.481 }, 00:30:12.481 { 00:30:12.481 "name": null, 00:30:12.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.481 "is_configured": false, 00:30:12.481 "data_offset": 2048, 00:30:12.481 "data_size": 63488 00:30:12.481 }, 00:30:12.481 { 00:30:12.481 "name": "BaseBdev3", 00:30:12.481 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:12.481 "is_configured": true, 00:30:12.481 "data_offset": 2048, 00:30:12.481 "data_size": 63488 00:30:12.481 }, 00:30:12.481 { 00:30:12.481 "name": "BaseBdev4", 00:30:12.481 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:12.481 "is_configured": true, 00:30:12.481 "data_offset": 2048, 00:30:12.481 "data_size": 63488 00:30:12.481 } 00:30:12.481 ] 00:30:12.481 }' 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.481 [2024-11-05 15:59:44.579254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:12.481 [2024-11-05 15:59:44.589730] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:12.481 [2024-11-05 15:59:44.589864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.481 [2024-11-05 15:59:44.589951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:12.481 [2024-11-05 15:59:44.589972] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:12.481 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.482 
15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.482 "name": "raid_bdev1", 00:30:12.482 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:12.482 "strip_size_kb": 0, 00:30:12.482 "state": "online", 00:30:12.482 "raid_level": "raid1", 00:30:12.482 "superblock": true, 00:30:12.482 "num_base_bdevs": 4, 00:30:12.482 "num_base_bdevs_discovered": 2, 00:30:12.482 "num_base_bdevs_operational": 2, 00:30:12.482 "base_bdevs_list": [ 00:30:12.482 { 00:30:12.482 "name": null, 00:30:12.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.482 "is_configured": false, 00:30:12.482 "data_offset": 0, 00:30:12.482 "data_size": 63488 00:30:12.482 }, 00:30:12.482 { 00:30:12.482 "name": null, 00:30:12.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.482 "is_configured": false, 00:30:12.482 "data_offset": 2048, 00:30:12.482 "data_size": 63488 00:30:12.482 }, 00:30:12.482 { 00:30:12.482 "name": "BaseBdev3", 00:30:12.482 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:12.482 "is_configured": true, 00:30:12.482 "data_offset": 2048, 00:30:12.482 "data_size": 63488 00:30:12.482 }, 00:30:12.482 { 00:30:12.482 "name": "BaseBdev4", 00:30:12.482 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:12.482 "is_configured": true, 00:30:12.482 "data_offset": 2048, 00:30:12.482 "data_size": 63488 00:30:12.482 } 00:30:12.482 ] 00:30:12.482 }' 00:30:12.482 15:59:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.482 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:12.740 "name": "raid_bdev1", 00:30:12.740 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:12.740 "strip_size_kb": 0, 00:30:12.740 "state": "online", 00:30:12.740 "raid_level": "raid1", 00:30:12.740 "superblock": true, 00:30:12.740 "num_base_bdevs": 4, 00:30:12.740 "num_base_bdevs_discovered": 2, 00:30:12.740 "num_base_bdevs_operational": 2, 00:30:12.740 "base_bdevs_list": [ 00:30:12.740 { 00:30:12.740 "name": null, 00:30:12.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.740 "is_configured": false, 00:30:12.740 "data_offset": 0, 00:30:12.740 "data_size": 63488 00:30:12.740 }, 00:30:12.740 
{ 00:30:12.740 "name": null, 00:30:12.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.740 "is_configured": false, 00:30:12.740 "data_offset": 2048, 00:30:12.740 "data_size": 63488 00:30:12.740 }, 00:30:12.740 { 00:30:12.740 "name": "BaseBdev3", 00:30:12.740 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:12.740 "is_configured": true, 00:30:12.740 "data_offset": 2048, 00:30:12.740 "data_size": 63488 00:30:12.740 }, 00:30:12.740 { 00:30:12.740 "name": "BaseBdev4", 00:30:12.740 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:12.740 "is_configured": true, 00:30:12.740 "data_offset": 2048, 00:30:12.740 "data_size": 63488 00:30:12.740 } 00:30:12.740 ] 00:30:12.740 }' 00:30:12.740 15:59:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.740 [2024-11-05 15:59:45.049718] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:12.740 [2024-11-05 15:59:45.049763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.740 [2024-11-05 15:59:45.049779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:30:12.740 [2024-11-05 15:59:45.049787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.740 [2024-11-05 15:59:45.050133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.740 [2024-11-05 15:59:45.050157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:12.740 [2024-11-05 15:59:45.050215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:12.740 [2024-11-05 15:59:45.050230] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:12.740 [2024-11-05 15:59:45.050236] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:12.740 [2024-11-05 15:59:45.050247] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:12.740 BaseBdev1 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.740 15:59:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:13.672 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:13.672 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:13.673 15:59:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.673 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.931 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:13.931 "name": "raid_bdev1", 00:30:13.931 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:13.931 "strip_size_kb": 0, 00:30:13.931 "state": "online", 00:30:13.931 "raid_level": "raid1", 00:30:13.931 "superblock": true, 00:30:13.931 "num_base_bdevs": 4, 00:30:13.931 "num_base_bdevs_discovered": 2, 00:30:13.931 "num_base_bdevs_operational": 2, 00:30:13.931 "base_bdevs_list": [ 00:30:13.931 { 00:30:13.931 "name": null, 00:30:13.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.931 "is_configured": false, 00:30:13.931 "data_offset": 0, 00:30:13.931 "data_size": 63488 00:30:13.931 }, 00:30:13.931 { 00:30:13.931 "name": null, 00:30:13.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.931 
"is_configured": false, 00:30:13.931 "data_offset": 2048, 00:30:13.931 "data_size": 63488 00:30:13.931 }, 00:30:13.931 { 00:30:13.931 "name": "BaseBdev3", 00:30:13.931 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:13.931 "is_configured": true, 00:30:13.931 "data_offset": 2048, 00:30:13.931 "data_size": 63488 00:30:13.931 }, 00:30:13.931 { 00:30:13.931 "name": "BaseBdev4", 00:30:13.931 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:13.931 "is_configured": true, 00:30:13.931 "data_offset": 2048, 00:30:13.931 "data_size": 63488 00:30:13.931 } 00:30:13.931 ] 00:30:13.931 }' 00:30:13.931 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:13.931 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:30:14.189 "name": "raid_bdev1", 00:30:14.189 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:14.189 "strip_size_kb": 0, 00:30:14.189 "state": "online", 00:30:14.189 "raid_level": "raid1", 00:30:14.189 "superblock": true, 00:30:14.189 "num_base_bdevs": 4, 00:30:14.189 "num_base_bdevs_discovered": 2, 00:30:14.189 "num_base_bdevs_operational": 2, 00:30:14.189 "base_bdevs_list": [ 00:30:14.189 { 00:30:14.189 "name": null, 00:30:14.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.189 "is_configured": false, 00:30:14.189 "data_offset": 0, 00:30:14.189 "data_size": 63488 00:30:14.189 }, 00:30:14.189 { 00:30:14.189 "name": null, 00:30:14.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.189 "is_configured": false, 00:30:14.189 "data_offset": 2048, 00:30:14.189 "data_size": 63488 00:30:14.189 }, 00:30:14.189 { 00:30:14.189 "name": "BaseBdev3", 00:30:14.189 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:14.189 "is_configured": true, 00:30:14.189 "data_offset": 2048, 00:30:14.189 "data_size": 63488 00:30:14.189 }, 00:30:14.189 { 00:30:14.189 "name": "BaseBdev4", 00:30:14.189 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:14.189 "is_configured": true, 00:30:14.189 "data_offset": 2048, 00:30:14.189 "data_size": 63488 00:30:14.189 } 00:30:14.189 ] 00:30:14.189 }' 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.189 [2024-11-05 15:59:46.478024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:14.189 [2024-11-05 15:59:46.478254] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:14.189 [2024-11-05 15:59:46.478337] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:14.189 request: 00:30:14.189 { 00:30:14.189 "base_bdev": "BaseBdev1", 00:30:14.189 "raid_bdev": "raid_bdev1", 00:30:14.189 "method": "bdev_raid_add_base_bdev", 00:30:14.189 "req_id": 1 00:30:14.189 } 00:30:14.189 Got JSON-RPC error response 00:30:14.189 response: 00:30:14.189 { 00:30:14.189 "code": -22, 00:30:14.189 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:14.189 } 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:14.189 15:59:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:15.122 "name": "raid_bdev1", 00:30:15.122 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:15.122 "strip_size_kb": 0, 00:30:15.122 "state": "online", 00:30:15.122 "raid_level": "raid1", 00:30:15.122 "superblock": true, 00:30:15.122 "num_base_bdevs": 4, 00:30:15.122 "num_base_bdevs_discovered": 2, 00:30:15.122 "num_base_bdevs_operational": 2, 00:30:15.122 "base_bdevs_list": [ 00:30:15.122 { 00:30:15.122 "name": null, 00:30:15.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.122 "is_configured": false, 00:30:15.122 "data_offset": 0, 00:30:15.122 "data_size": 63488 00:30:15.122 }, 00:30:15.122 { 00:30:15.122 "name": null, 00:30:15.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.122 "is_configured": false, 00:30:15.122 "data_offset": 2048, 00:30:15.122 "data_size": 63488 00:30:15.122 }, 00:30:15.122 { 00:30:15.122 "name": "BaseBdev3", 00:30:15.122 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:15.122 "is_configured": true, 00:30:15.122 "data_offset": 2048, 00:30:15.122 "data_size": 63488 00:30:15.122 }, 00:30:15.122 { 00:30:15.122 "name": "BaseBdev4", 00:30:15.122 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:15.122 "is_configured": true, 00:30:15.122 "data_offset": 2048, 00:30:15.122 "data_size": 63488 00:30:15.122 } 00:30:15.122 ] 00:30:15.122 }' 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:15.122 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:15.381 15:59:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:15.381 "name": "raid_bdev1", 00:30:15.381 "uuid": "071f96d8-a2f3-4522-86f2-b86bec403375", 00:30:15.381 "strip_size_kb": 0, 00:30:15.381 "state": "online", 00:30:15.381 "raid_level": "raid1", 00:30:15.381 "superblock": true, 00:30:15.381 "num_base_bdevs": 4, 00:30:15.381 "num_base_bdevs_discovered": 2, 00:30:15.381 "num_base_bdevs_operational": 2, 00:30:15.381 "base_bdevs_list": [ 00:30:15.381 { 00:30:15.381 "name": null, 00:30:15.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.381 "is_configured": false, 00:30:15.381 "data_offset": 0, 00:30:15.381 "data_size": 63488 00:30:15.381 }, 00:30:15.381 { 00:30:15.381 "name": null, 00:30:15.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.381 "is_configured": false, 00:30:15.381 "data_offset": 2048, 00:30:15.381 "data_size": 63488 00:30:15.381 }, 00:30:15.381 { 00:30:15.381 "name": "BaseBdev3", 00:30:15.381 "uuid": "6dc76aee-85c8-56b3-99c5-62edc1aa9e02", 00:30:15.381 "is_configured": true, 00:30:15.381 "data_offset": 2048, 00:30:15.381 "data_size": 63488 00:30:15.381 }, 
00:30:15.381 { 00:30:15.381 "name": "BaseBdev4", 00:30:15.381 "uuid": "55fc584c-c574-5d62-b632-393fcfd9b3c9", 00:30:15.381 "is_configured": true, 00:30:15.381 "data_offset": 2048, 00:30:15.381 "data_size": 63488 00:30:15.381 } 00:30:15.381 ] 00:30:15.381 }' 00:30:15.381 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:15.638 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:15.638 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75607 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75607 ']' 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75607 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75607 00:30:15.639 killing process with pid 75607 00:30:15.639 Received shutdown signal, test time was about 60.000000 seconds 00:30:15.639 00:30:15.639 Latency(us) 00:30:15.639 [2024-11-05T15:59:48.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.639 [2024-11-05T15:59:48.054Z] =================================================================================================================== 00:30:15.639 [2024-11-05T15:59:48.054Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75607' 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75607 00:30:15.639 [2024-11-05 15:59:47.875902] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:15.639 15:59:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75607 00:30:15.639 [2024-11-05 15:59:47.876000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:15.639 [2024-11-05 15:59:47.876050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:15.639 [2024-11-05 15:59:47.876058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:15.896 [2024-11-05 15:59:48.110067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:30:16.462 00:30:16.462 real 0m21.465s 00:30:16.462 user 0m24.929s 00:30:16.462 sys 0m3.012s 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:16.462 ************************************ 00:30:16.462 END TEST raid_rebuild_test_sb 00:30:16.462 ************************************ 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.462 15:59:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:16.462 15:59:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:30:16.462 15:59:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:16.462 15:59:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:30:16.462 ************************************ 00:30:16.462 START TEST raid_rebuild_test_io 00:30:16.462 ************************************ 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:16.462 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:16.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76331 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76331 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76331 ']' 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:16.463 15:59:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:16.463 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:16.463 Zero copy mechanism will not be used. 00:30:16.463 [2024-11-05 15:59:48.764463] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:30:16.463 [2024-11-05 15:59:48.764580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76331 ] 00:30:16.720 [2024-11-05 15:59:48.920592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.720 [2024-11-05 15:59:49.002970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.720 [2024-11-05 15:59:49.113545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:16.720 [2024-11-05 15:59:49.113574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.285 BaseBdev1_malloc 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.285 [2024-11-05 15:59:49.588169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:17.285 [2024-11-05 15:59:49.588221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.285 [2024-11-05 15:59:49.588238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:17.285 [2024-11-05 15:59:49.588249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.285 [2024-11-05 15:59:49.589994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.285 [2024-11-05 15:59:49.590025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:17.285 BaseBdev1 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:30:17.285 BaseBdev2_malloc 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.285 [2024-11-05 15:59:49.619967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:17.285 [2024-11-05 15:59:49.620109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.285 [2024-11-05 15:59:49.620127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:17.285 [2024-11-05 15:59:49.620137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.285 [2024-11-05 15:59:49.621838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.285 [2024-11-05 15:59:49.621877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:17.285 BaseBdev2 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.285 BaseBdev3_malloc 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.285 [2024-11-05 15:59:49.664402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:17.285 [2024-11-05 15:59:49.664447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.285 [2024-11-05 15:59:49.664462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:17.285 [2024-11-05 15:59:49.664471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.285 [2024-11-05 15:59:49.666173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.285 [2024-11-05 15:59:49.666204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:17.285 BaseBdev3 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.285 BaseBdev4_malloc 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:17.285 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.286 [2024-11-05 15:59:49.696040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:17.286 [2024-11-05 15:59:49.696080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.286 [2024-11-05 15:59:49.696095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:17.286 [2024-11-05 15:59:49.696104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.286 [2024-11-05 15:59:49.697792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.286 [2024-11-05 15:59:49.697824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:17.286 BaseBdev4 00:30:17.286 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.286 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:17.286 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.286 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.544 spare_malloc 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.544 spare_delay 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.544 [2024-11-05 15:59:49.735448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:17.544 [2024-11-05 15:59:49.735581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.544 [2024-11-05 15:59:49.735600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:17.544 [2024-11-05 15:59:49.735609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.544 [2024-11-05 15:59:49.737306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.544 [2024-11-05 15:59:49.737333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:17.544 spare 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.544 [2024-11-05 15:59:49.743492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:17.544 [2024-11-05 15:59:49.744997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:17.544 [2024-11-05 15:59:49.745047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:17.544 [2024-11-05 15:59:49.745087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:30:17.544 [2024-11-05 15:59:49.745151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:17.544 [2024-11-05 15:59:49.745161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:17.544 [2024-11-05 15:59:49.745360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:17.544 [2024-11-05 15:59:49.745477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:17.544 [2024-11-05 15:59:49.745485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:17.544 [2024-11-05 15:59:49.745591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:17.544 "name": "raid_bdev1", 00:30:17.544 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:17.544 "strip_size_kb": 0, 00:30:17.544 "state": "online", 00:30:17.544 "raid_level": "raid1", 00:30:17.544 "superblock": false, 00:30:17.544 "num_base_bdevs": 4, 00:30:17.544 "num_base_bdevs_discovered": 4, 00:30:17.544 "num_base_bdevs_operational": 4, 00:30:17.544 "base_bdevs_list": [ 00:30:17.544 { 00:30:17.544 "name": "BaseBdev1", 00:30:17.544 "uuid": "b9968824-72c0-5154-9e7e-1bc8db6ad9f6", 00:30:17.544 "is_configured": true, 00:30:17.544 "data_offset": 0, 00:30:17.544 "data_size": 65536 00:30:17.544 }, 00:30:17.544 { 00:30:17.544 "name": "BaseBdev2", 00:30:17.544 "uuid": "0405a613-60cf-5d88-8d40-4771481731ff", 00:30:17.544 "is_configured": true, 00:30:17.544 "data_offset": 0, 00:30:17.544 "data_size": 65536 00:30:17.544 }, 00:30:17.544 { 00:30:17.544 "name": "BaseBdev3", 00:30:17.544 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:17.544 "is_configured": true, 00:30:17.544 "data_offset": 0, 00:30:17.544 "data_size": 65536 00:30:17.544 }, 00:30:17.544 { 00:30:17.544 "name": "BaseBdev4", 00:30:17.544 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:17.544 "is_configured": true, 00:30:17.544 "data_offset": 0, 00:30:17.544 "data_size": 65536 00:30:17.544 } 00:30:17.544 ] 00:30:17.544 }' 00:30:17.544 
15:59:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:17.544 15:59:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:17.802 [2024-11-05 15:59:50.035891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:17.802 15:59:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.802 [2024-11-05 15:59:50.103547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:17.802 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:17.803 "name": "raid_bdev1", 00:30:17.803 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:17.803 "strip_size_kb": 0, 00:30:17.803 "state": "online", 00:30:17.803 "raid_level": "raid1", 00:30:17.803 "superblock": false, 00:30:17.803 "num_base_bdevs": 4, 00:30:17.803 "num_base_bdevs_discovered": 3, 00:30:17.803 "num_base_bdevs_operational": 3, 00:30:17.803 "base_bdevs_list": [ 00:30:17.803 { 00:30:17.803 "name": null, 00:30:17.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.803 "is_configured": false, 00:30:17.803 "data_offset": 0, 00:30:17.803 "data_size": 65536 00:30:17.803 }, 00:30:17.803 { 00:30:17.803 "name": "BaseBdev2", 00:30:17.803 "uuid": "0405a613-60cf-5d88-8d40-4771481731ff", 00:30:17.803 "is_configured": true, 00:30:17.803 "data_offset": 0, 00:30:17.803 "data_size": 65536 00:30:17.803 }, 00:30:17.803 { 00:30:17.803 "name": "BaseBdev3", 00:30:17.803 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:17.803 "is_configured": true, 00:30:17.803 "data_offset": 0, 00:30:17.803 "data_size": 65536 00:30:17.803 }, 00:30:17.803 { 00:30:17.803 "name": "BaseBdev4", 00:30:17.803 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:17.803 "is_configured": true, 00:30:17.803 "data_offset": 0, 00:30:17.803 "data_size": 65536 00:30:17.803 } 00:30:17.803 ] 00:30:17.803 }' 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:17.803 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.803 [2024-11-05 15:59:50.192010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:17.803 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:17.803 Zero copy mechanism will not be used. 00:30:17.803 Running I/O for 60 seconds... 
00:30:18.060 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:18.060 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.060 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:18.060 [2024-11-05 15:59:50.408915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:18.060 15:59:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.060 15:59:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:18.060 [2024-11-05 15:59:50.456783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:18.060 [2024-11-05 15:59:50.458483] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:18.317 [2024-11-05 15:59:50.561152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:18.317 [2024-11-05 15:59:50.561531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:18.317 [2024-11-05 15:59:50.670124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:18.317 [2024-11-05 15:59:50.670348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:18.882 [2024-11-05 15:59:51.008298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:18.882 [2024-11-05 15:59:51.121443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:19.140 131.00 IOPS, 393.00 MiB/s [2024-11-05T15:59:51.555Z] [2024-11-05 15:59:51.350743] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:19.140 [2024-11-05 15:59:51.351080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.140 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:19.140 "name": "raid_bdev1", 00:30:19.140 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:19.140 "strip_size_kb": 0, 00:30:19.140 "state": "online", 00:30:19.140 "raid_level": "raid1", 00:30:19.140 "superblock": false, 00:30:19.140 "num_base_bdevs": 4, 00:30:19.140 "num_base_bdevs_discovered": 4, 00:30:19.140 "num_base_bdevs_operational": 4, 00:30:19.140 "process": { 00:30:19.140 "type": "rebuild", 00:30:19.140 "target": "spare", 00:30:19.140 "progress": { 00:30:19.141 "blocks": 14336, 
00:30:19.141 "percent": 21 00:30:19.141 } 00:30:19.141 }, 00:30:19.141 "base_bdevs_list": [ 00:30:19.141 { 00:30:19.141 "name": "spare", 00:30:19.141 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:19.141 "is_configured": true, 00:30:19.141 "data_offset": 0, 00:30:19.141 "data_size": 65536 00:30:19.141 }, 00:30:19.141 { 00:30:19.141 "name": "BaseBdev2", 00:30:19.141 "uuid": "0405a613-60cf-5d88-8d40-4771481731ff", 00:30:19.141 "is_configured": true, 00:30:19.141 "data_offset": 0, 00:30:19.141 "data_size": 65536 00:30:19.141 }, 00:30:19.141 { 00:30:19.141 "name": "BaseBdev3", 00:30:19.141 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:19.141 "is_configured": true, 00:30:19.141 "data_offset": 0, 00:30:19.141 "data_size": 65536 00:30:19.141 }, 00:30:19.141 { 00:30:19.141 "name": "BaseBdev4", 00:30:19.141 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:19.141 "is_configured": true, 00:30:19.141 "data_offset": 0, 00:30:19.141 "data_size": 65536 00:30:19.141 } 00:30:19.141 ] 00:30:19.141 }' 00:30:19.141 [2024-11-05 15:59:51.482670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:19.141 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:19.141 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:19.141 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:19.141 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:19.141 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:19.141 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.141 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:19.141 [2024-11-05 
15:59:51.545252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:19.398 [2024-11-05 15:59:51.591187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:19.398 [2024-11-05 15:59:51.603325] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:19.398 [2024-11-05 15:59:51.605959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:19.398 [2024-11-05 15:59:51.605984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:19.398 [2024-11-05 15:59:51.605995] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:19.398 [2024-11-05 15:59:51.621170] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:19.398 "name": "raid_bdev1", 00:30:19.398 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:19.398 "strip_size_kb": 0, 00:30:19.398 "state": "online", 00:30:19.398 "raid_level": "raid1", 00:30:19.398 "superblock": false, 00:30:19.398 "num_base_bdevs": 4, 00:30:19.398 "num_base_bdevs_discovered": 3, 00:30:19.398 "num_base_bdevs_operational": 3, 00:30:19.398 "base_bdevs_list": [ 00:30:19.398 { 00:30:19.398 "name": null, 00:30:19.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.398 "is_configured": false, 00:30:19.398 "data_offset": 0, 00:30:19.398 "data_size": 65536 00:30:19.398 }, 00:30:19.398 { 00:30:19.398 "name": "BaseBdev2", 00:30:19.398 "uuid": "0405a613-60cf-5d88-8d40-4771481731ff", 00:30:19.398 "is_configured": true, 00:30:19.398 "data_offset": 0, 00:30:19.398 "data_size": 65536 00:30:19.398 }, 00:30:19.398 { 00:30:19.398 "name": "BaseBdev3", 00:30:19.398 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:19.398 "is_configured": true, 00:30:19.398 "data_offset": 0, 00:30:19.398 "data_size": 65536 00:30:19.398 }, 00:30:19.398 { 00:30:19.398 "name": "BaseBdev4", 00:30:19.398 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:19.398 "is_configured": true, 00:30:19.398 
"data_offset": 0, 00:30:19.398 "data_size": 65536 00:30:19.398 } 00:30:19.398 ] 00:30:19.398 }' 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:19.398 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:19.657 "name": "raid_bdev1", 00:30:19.657 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:19.657 "strip_size_kb": 0, 00:30:19.657 "state": "online", 00:30:19.657 "raid_level": "raid1", 00:30:19.657 "superblock": false, 00:30:19.657 "num_base_bdevs": 4, 00:30:19.657 "num_base_bdevs_discovered": 3, 00:30:19.657 "num_base_bdevs_operational": 3, 00:30:19.657 "base_bdevs_list": [ 00:30:19.657 { 00:30:19.657 "name": null, 00:30:19.657 "uuid": "00000000-0000-0000-0000-000000000000", 
00:30:19.657 "is_configured": false, 00:30:19.657 "data_offset": 0, 00:30:19.657 "data_size": 65536 00:30:19.657 }, 00:30:19.657 { 00:30:19.657 "name": "BaseBdev2", 00:30:19.657 "uuid": "0405a613-60cf-5d88-8d40-4771481731ff", 00:30:19.657 "is_configured": true, 00:30:19.657 "data_offset": 0, 00:30:19.657 "data_size": 65536 00:30:19.657 }, 00:30:19.657 { 00:30:19.657 "name": "BaseBdev3", 00:30:19.657 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:19.657 "is_configured": true, 00:30:19.657 "data_offset": 0, 00:30:19.657 "data_size": 65536 00:30:19.657 }, 00:30:19.657 { 00:30:19.657 "name": "BaseBdev4", 00:30:19.657 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:19.657 "is_configured": true, 00:30:19.657 "data_offset": 0, 00:30:19.657 "data_size": 65536 00:30:19.657 } 00:30:19.657 ] 00:30:19.657 }' 00:30:19.657 15:59:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:19.657 15:59:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:19.657 15:59:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:19.657 15:59:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:19.657 15:59:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:19.657 15:59:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.657 15:59:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:19.657 [2024-11-05 15:59:52.053555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:19.915 15:59:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.915 15:59:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:19.915 [2024-11-05 15:59:52.099188] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:19.915 [2024-11-05 15:59:52.100746] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:19.915 178.50 IOPS, 535.50 MiB/s [2024-11-05T15:59:52.330Z] [2024-11-05 15:59:52.241283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:20.172 [2024-11-05 15:59:52.372777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:20.172 [2024-11-05 15:59:52.372998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:20.431 [2024-11-05 15:59:52.595717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:20.689 [2024-11-05 15:59:52.948633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.689 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:20.946 "name": "raid_bdev1", 00:30:20.946 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:20.946 "strip_size_kb": 0, 00:30:20.946 "state": "online", 00:30:20.946 "raid_level": "raid1", 00:30:20.946 "superblock": false, 00:30:20.946 "num_base_bdevs": 4, 00:30:20.946 "num_base_bdevs_discovered": 4, 00:30:20.946 "num_base_bdevs_operational": 4, 00:30:20.946 "process": { 00:30:20.946 "type": "rebuild", 00:30:20.946 "target": "spare", 00:30:20.946 "progress": { 00:30:20.946 "blocks": 14336, 00:30:20.946 "percent": 21 00:30:20.946 } 00:30:20.946 }, 00:30:20.946 "base_bdevs_list": [ 00:30:20.946 { 00:30:20.946 "name": "spare", 00:30:20.946 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:20.946 "is_configured": true, 00:30:20.946 "data_offset": 0, 00:30:20.946 "data_size": 65536 00:30:20.946 }, 00:30:20.946 { 00:30:20.946 "name": "BaseBdev2", 00:30:20.946 "uuid": "0405a613-60cf-5d88-8d40-4771481731ff", 00:30:20.946 "is_configured": true, 00:30:20.946 "data_offset": 0, 00:30:20.946 "data_size": 65536 00:30:20.946 }, 00:30:20.946 { 00:30:20.946 "name": "BaseBdev3", 00:30:20.946 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:20.946 "is_configured": true, 00:30:20.946 "data_offset": 0, 00:30:20.946 "data_size": 65536 00:30:20.946 }, 00:30:20.946 { 00:30:20.946 "name": "BaseBdev4", 00:30:20.946 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:20.946 "is_configured": true, 00:30:20.946 "data_offset": 0, 00:30:20.946 "data_size": 65536 00:30:20.946 } 00:30:20.946 ] 00:30:20.946 }' 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:20.946 [2024-11-05 15:59:53.168938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:20.946 165.33 IOPS, 496.00 MiB/s [2024-11-05T15:59:53.361Z] [2024-11-05 15:59:53.208686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:20.946 [2024-11-05 15:59:53.340692] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:30:20.946 [2024-11-05 15:59:53.340721] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:20.946 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:20.947 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:20.947 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.947 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.947 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:20.947 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.204 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.204 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:21.204 "name": "raid_bdev1", 00:30:21.204 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:21.204 "strip_size_kb": 0, 00:30:21.204 "state": "online", 00:30:21.204 "raid_level": "raid1", 00:30:21.204 "superblock": false, 00:30:21.204 "num_base_bdevs": 4, 00:30:21.204 "num_base_bdevs_discovered": 3, 00:30:21.204 "num_base_bdevs_operational": 3, 00:30:21.204 "process": { 00:30:21.204 "type": "rebuild", 00:30:21.204 "target": "spare", 00:30:21.204 "progress": { 00:30:21.204 "blocks": 18432, 00:30:21.204 "percent": 28 00:30:21.205 } 00:30:21.205 }, 00:30:21.205 "base_bdevs_list": [ 00:30:21.205 { 00:30:21.205 "name": "spare", 00:30:21.205 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:21.205 "is_configured": true, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 65536 00:30:21.205 }, 00:30:21.205 { 00:30:21.205 "name": null, 
00:30:21.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.205 "is_configured": false, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 65536 00:30:21.205 }, 00:30:21.205 { 00:30:21.205 "name": "BaseBdev3", 00:30:21.205 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:21.205 "is_configured": true, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 65536 00:30:21.205 }, 00:30:21.205 { 00:30:21.205 "name": "BaseBdev4", 00:30:21.205 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:21.205 "is_configured": true, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 65536 00:30:21.205 } 00:30:21.205 ] 00:30:21.205 }' 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=369 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:21.205 "name": "raid_bdev1", 00:30:21.205 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:21.205 "strip_size_kb": 0, 00:30:21.205 "state": "online", 00:30:21.205 "raid_level": "raid1", 00:30:21.205 "superblock": false, 00:30:21.205 "num_base_bdevs": 4, 00:30:21.205 "num_base_bdevs_discovered": 3, 00:30:21.205 "num_base_bdevs_operational": 3, 00:30:21.205 "process": { 00:30:21.205 "type": "rebuild", 00:30:21.205 "target": "spare", 00:30:21.205 "progress": { 00:30:21.205 "blocks": 18432, 00:30:21.205 "percent": 28 00:30:21.205 } 00:30:21.205 }, 00:30:21.205 "base_bdevs_list": [ 00:30:21.205 { 00:30:21.205 "name": "spare", 00:30:21.205 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:21.205 "is_configured": true, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 65536 00:30:21.205 }, 00:30:21.205 { 00:30:21.205 "name": null, 00:30:21.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.205 "is_configured": false, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 65536 00:30:21.205 }, 00:30:21.205 { 00:30:21.205 "name": "BaseBdev3", 00:30:21.205 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:21.205 "is_configured": true, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 65536 00:30:21.205 }, 00:30:21.205 { 00:30:21.205 "name": "BaseBdev4", 00:30:21.205 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:21.205 "is_configured": true, 00:30:21.205 "data_offset": 0, 00:30:21.205 "data_size": 
65536 00:30:21.205 } 00:30:21.205 ] 00:30:21.205 }' 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:21.205 15:59:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:21.464 [2024-11-05 15:59:53.783891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:21.464 [2024-11-05 15:59:53.784172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:21.722 [2024-11-05 15:59:53.992359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:22.238 141.75 IOPS, 425.25 MiB/s [2024-11-05T15:59:54.653Z] 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:22.238 [2024-11-05 15:59:54.548091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:22.238 [2024-11-05 15:59:54.548463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.238 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:22.238 "name": "raid_bdev1", 00:30:22.238 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:22.238 "strip_size_kb": 0, 00:30:22.238 "state": "online", 00:30:22.238 "raid_level": "raid1", 00:30:22.238 "superblock": false, 00:30:22.238 "num_base_bdevs": 4, 00:30:22.238 "num_base_bdevs_discovered": 3, 00:30:22.238 "num_base_bdevs_operational": 3, 00:30:22.238 "process": { 00:30:22.238 "type": "rebuild", 00:30:22.238 "target": "spare", 00:30:22.238 "progress": { 00:30:22.238 "blocks": 38912, 00:30:22.238 "percent": 59 00:30:22.238 } 00:30:22.238 }, 00:30:22.238 "base_bdevs_list": [ 00:30:22.238 { 00:30:22.238 "name": "spare", 00:30:22.238 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:22.238 "is_configured": true, 00:30:22.238 "data_offset": 0, 00:30:22.238 "data_size": 65536 00:30:22.238 }, 00:30:22.238 { 00:30:22.238 "name": null, 00:30:22.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.238 "is_configured": false, 00:30:22.238 "data_offset": 0, 00:30:22.238 "data_size": 65536 00:30:22.238 }, 00:30:22.238 { 00:30:22.238 "name": "BaseBdev3", 00:30:22.238 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:22.238 "is_configured": true, 
00:30:22.238 "data_offset": 0, 00:30:22.238 "data_size": 65536 00:30:22.238 }, 00:30:22.238 { 00:30:22.238 "name": "BaseBdev4", 00:30:22.238 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:22.238 "is_configured": true, 00:30:22.238 "data_offset": 0, 00:30:22.238 "data_size": 65536 00:30:22.238 } 00:30:22.238 ] 00:30:22.238 }' 00:30:22.239 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:22.239 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:22.239 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:22.239 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:22.239 15:59:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:22.496 [2024-11-05 15:59:54.898322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:30:22.754 [2024-11-05 15:59:55.105475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:23.013 121.60 IOPS, 364.80 MiB/s [2024-11-05T15:59:55.428Z] [2024-11-05 15:59:55.331488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:23.271 "name": "raid_bdev1", 00:30:23.271 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:23.271 "strip_size_kb": 0, 00:30:23.271 "state": "online", 00:30:23.271 "raid_level": "raid1", 00:30:23.271 "superblock": false, 00:30:23.271 "num_base_bdevs": 4, 00:30:23.271 "num_base_bdevs_discovered": 3, 00:30:23.271 "num_base_bdevs_operational": 3, 00:30:23.271 "process": { 00:30:23.271 "type": "rebuild", 00:30:23.271 "target": "spare", 00:30:23.271 "progress": { 00:30:23.271 "blocks": 55296, 00:30:23.271 "percent": 84 00:30:23.271 } 00:30:23.271 }, 00:30:23.271 "base_bdevs_list": [ 00:30:23.271 { 00:30:23.271 "name": "spare", 00:30:23.271 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:23.271 "is_configured": true, 00:30:23.271 "data_offset": 0, 00:30:23.271 "data_size": 65536 00:30:23.271 }, 00:30:23.271 { 00:30:23.271 "name": null, 00:30:23.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.271 "is_configured": false, 00:30:23.271 "data_offset": 0, 00:30:23.271 "data_size": 65536 00:30:23.271 }, 00:30:23.271 { 00:30:23.271 "name": "BaseBdev3", 00:30:23.271 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:23.271 "is_configured": true, 00:30:23.271 "data_offset": 0, 00:30:23.271 "data_size": 65536 
00:30:23.271 }, 00:30:23.271 { 00:30:23.271 "name": "BaseBdev4", 00:30:23.271 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:23.271 "is_configured": true, 00:30:23.271 "data_offset": 0, 00:30:23.271 "data_size": 65536 00:30:23.271 } 00:30:23.271 ] 00:30:23.271 }' 00:30:23.271 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:23.529 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.529 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:23.529 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.529 15:59:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:23.529 [2024-11-05 15:59:55.766432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:23.529 [2024-11-05 15:59:55.766663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:23.787 [2024-11-05 15:59:56.100162] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:24.046 109.00 IOPS, 327.00 MiB/s [2024-11-05T15:59:56.461Z] [2024-11-05 15:59:56.205160] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:24.046 [2024-11-05 15:59:56.207129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:24.611 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:24.611 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.611 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:24.612 "name": "raid_bdev1", 00:30:24.612 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:24.612 "strip_size_kb": 0, 00:30:24.612 "state": "online", 00:30:24.612 "raid_level": "raid1", 00:30:24.612 "superblock": false, 00:30:24.612 "num_base_bdevs": 4, 00:30:24.612 "num_base_bdevs_discovered": 3, 00:30:24.612 "num_base_bdevs_operational": 3, 00:30:24.612 "base_bdevs_list": [ 00:30:24.612 { 00:30:24.612 "name": "spare", 00:30:24.612 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": null, 00:30:24.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.612 "is_configured": false, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": "BaseBdev3", 00:30:24.612 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 
"name": "BaseBdev4", 00:30:24.612 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 } 00:30:24.612 ] 00:30:24.612 }' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:24.612 "name": 
"raid_bdev1", 00:30:24.612 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:24.612 "strip_size_kb": 0, 00:30:24.612 "state": "online", 00:30:24.612 "raid_level": "raid1", 00:30:24.612 "superblock": false, 00:30:24.612 "num_base_bdevs": 4, 00:30:24.612 "num_base_bdevs_discovered": 3, 00:30:24.612 "num_base_bdevs_operational": 3, 00:30:24.612 "base_bdevs_list": [ 00:30:24.612 { 00:30:24.612 "name": "spare", 00:30:24.612 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": null, 00:30:24.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.612 "is_configured": false, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": "BaseBdev3", 00:30:24.612 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": "BaseBdev4", 00:30:24.612 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 } 00:30:24.612 ] 00:30:24.612 }' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:24.612 
15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.612 "name": "raid_bdev1", 00:30:24.612 "uuid": "40324fe7-53dc-4ade-aef9-e6ed4afb138f", 00:30:24.612 "strip_size_kb": 0, 00:30:24.612 "state": "online", 00:30:24.612 "raid_level": "raid1", 00:30:24.612 "superblock": false, 00:30:24.612 "num_base_bdevs": 4, 00:30:24.612 "num_base_bdevs_discovered": 3, 00:30:24.612 "num_base_bdevs_operational": 3, 00:30:24.612 "base_bdevs_list": [ 00:30:24.612 { 00:30:24.612 "name": "spare", 00:30:24.612 "uuid": "93b96063-65cf-5415-99cf-e654a2dd1771", 00:30:24.612 
"is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": null, 00:30:24.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.612 "is_configured": false, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": "BaseBdev3", 00:30:24.612 "uuid": "8862396d-b3de-5902-8167-b3f217617548", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 }, 00:30:24.612 { 00:30:24.612 "name": "BaseBdev4", 00:30:24.612 "uuid": "26afb4c4-deb1-50b0-b63b-9bf63b64300c", 00:30:24.612 "is_configured": true, 00:30:24.612 "data_offset": 0, 00:30:24.612 "data_size": 65536 00:30:24.612 } 00:30:24.612 ] 00:30:24.612 }' 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:24.612 15:59:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:24.870 98.43 IOPS, 295.29 MiB/s [2024-11-05T15:59:57.285Z] 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:24.870 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.870 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:24.870 [2024-11-05 15:59:57.271418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:24.870 [2024-11-05 15:59:57.271442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:25.128 00:30:25.128 Latency(us) 00:30:25.128 [2024-11-05T15:59:57.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.128 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:25.128 raid_bdev1 : 7.18 96.55 289.65 0.00 0.00 14613.46 263.09 108083.99 00:30:25.128 [2024-11-05T15:59:57.543Z] 
=================================================================================================================== 00:30:25.128 [2024-11-05T15:59:57.543Z] Total : 96.55 289.65 0.00 0.00 14613.46 263.09 108083.99 00:30:25.128 { 00:30:25.128 "results": [ 00:30:25.128 { 00:30:25.128 "job": "raid_bdev1", 00:30:25.128 "core_mask": "0x1", 00:30:25.128 "workload": "randrw", 00:30:25.128 "percentage": 50, 00:30:25.128 "status": "finished", 00:30:25.128 "queue_depth": 2, 00:30:25.128 "io_size": 3145728, 00:30:25.128 "runtime": 7.177636, 00:30:25.128 "iops": 96.54989470070647, 00:30:25.128 "mibps": 289.6496841021194, 00:30:25.128 "io_failed": 0, 00:30:25.128 "io_timeout": 0, 00:30:25.128 "avg_latency_us": 14613.456374736374, 00:30:25.128 "min_latency_us": 263.08923076923077, 00:30:25.128 "max_latency_us": 108083.9876923077 00:30:25.128 } 00:30:25.128 ], 00:30:25.128 "core_count": 1 00:30:25.128 } 00:30:25.128 [2024-11-05 15:59:57.383174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:25.128 [2024-11-05 15:59:57.383212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:25.128 [2024-11-05 15:59:57.383296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:25.128 [2024-11-05 15:59:57.383307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:25.128 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:30:25.385 /dev/nbd0 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( 
i = 1 )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:25.385 1+0 records in 00:30:25.385 1+0 records out 00:30:25.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229033 s, 17.9 MB/s 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:30:25.385 15:59:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:25.385 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:25.643 /dev/nbd1 00:30:25.643 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:25.644 15:59:57 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:25.644 1+0 records in 00:30:25.644 1+0 records out 00:30:25.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028073 s, 14.6 MB/s 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:25.644 
15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:25.644 15:59:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:25.902 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:26.159 /dev/nbd1 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:26.159 1+0 records in 00:30:26.159 1+0 records out 00:30:26.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175835 s, 23.3 
MB/s 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:26.159 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:26.417 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:26.675 15:59:58 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76331 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76331 ']' 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76331 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76331 00:30:26.675 killing process with pid 76331 00:30:26.675 Received shutdown signal, test time was about 8.722982 seconds 00:30:26.675 00:30:26.675 Latency(us) 00:30:26.675 [2024-11-05T15:59:59.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.675 [2024-11-05T15:59:59.090Z] =================================================================================================================== 00:30:26.675 [2024-11-05T15:59:59.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76331' 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76331 00:30:26.675 15:59:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76331 00:30:26.675 
[2024-11-05 15:59:58.916745] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:26.933 [2024-11-05 15:59:59.118647] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:30:27.498 00:30:27.498 real 0m11.012s 00:30:27.498 user 0m13.745s 00:30:27.498 sys 0m1.221s 00:30:27.498 ************************************ 00:30:27.498 END TEST raid_rebuild_test_io 00:30:27.498 ************************************ 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:27.498 15:59:59 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:27.498 15:59:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:30:27.498 15:59:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:27.498 15:59:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:27.498 ************************************ 00:30:27.498 START TEST raid_rebuild_test_sb_io 00:30:27.498 ************************************ 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:27.498 15:59:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76719 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76719 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 76719 ']' 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:27.498 15:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:27.498 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:30:27.498 Zero copy mechanism will not be used. 00:30:27.498 [2024-11-05 15:59:59.808108] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:30:27.498 [2024-11-05 15:59:59.808203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76719 ] 00:30:27.756 [2024-11-05 15:59:59.963227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.756 [2024-11-05 16:00:00.063640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.015 [2024-11-05 16:00:00.199391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:28.015 [2024-11-05 16:00:00.199439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.272 BaseBdev1_malloc 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:28.272 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.548 [2024-11-05 16:00:00.691461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:28.548 [2024-11-05 16:00:00.691526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.548 [2024-11-05 16:00:00.691551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:28.548 [2024-11-05 16:00:00.691564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.548 [2024-11-05 16:00:00.693832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.548 [2024-11-05 16:00:00.693879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:28.548 BaseBdev1 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.548 BaseBdev2_malloc 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.548 [2024-11-05 16:00:00.727211] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:28.548 [2024-11-05 16:00:00.727262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.548 [2024-11-05 16:00:00.727280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:28.548 [2024-11-05 16:00:00.727292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.548 [2024-11-05 16:00:00.729412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.548 [2024-11-05 16:00:00.729443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:28.548 BaseBdev2 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.548 BaseBdev3_malloc 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.548 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.548 [2024-11-05 16:00:00.781004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:28.549 [2024-11-05 16:00:00.781054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:30:28.549 [2024-11-05 16:00:00.781074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:28.549 [2024-11-05 16:00:00.781085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.549 [2024-11-05 16:00:00.783170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.549 [2024-11-05 16:00:00.783204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:28.549 BaseBdev3 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 BaseBdev4_malloc 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 [2024-11-05 16:00:00.820601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:28.549 [2024-11-05 16:00:00.820648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.549 [2024-11-05 16:00:00.820664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:28.549 
[2024-11-05 16:00:00.820674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.549 [2024-11-05 16:00:00.822769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.549 [2024-11-05 16:00:00.822803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:28.549 BaseBdev4 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 spare_malloc 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 spare_delay 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 [2024-11-05 16:00:00.864307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:28.549 [2024-11-05 16:00:00.864356] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.549 [2024-11-05 16:00:00.864373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:28.549 [2024-11-05 16:00:00.864384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.549 [2024-11-05 16:00:00.866467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.549 [2024-11-05 16:00:00.866499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:28.549 spare 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 [2024-11-05 16:00:00.872370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:28.549 [2024-11-05 16:00:00.874208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:28.549 [2024-11-05 16:00:00.874275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:28.549 [2024-11-05 16:00:00.874331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:28.549 [2024-11-05 16:00:00.874531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:28.549 [2024-11-05 16:00:00.874555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:28.549 [2024-11-05 16:00:00.874857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:28.549 [2024-11-05 16:00:00.875033] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:28.549 [2024-11-05 16:00:00.875054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:28.549 [2024-11-05 16:00:00.875208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.549 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:28.549 "name": "raid_bdev1", 00:30:28.549 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:28.549 "strip_size_kb": 0, 00:30:28.549 "state": "online", 00:30:28.549 "raid_level": "raid1", 00:30:28.549 "superblock": true, 00:30:28.549 "num_base_bdevs": 4, 00:30:28.549 "num_base_bdevs_discovered": 4, 00:30:28.549 "num_base_bdevs_operational": 4, 00:30:28.549 "base_bdevs_list": [ 00:30:28.549 { 00:30:28.549 "name": "BaseBdev1", 00:30:28.549 "uuid": "40cbcd39-faa4-5a8f-bfdc-eab009ef1de2", 00:30:28.549 "is_configured": true, 00:30:28.549 "data_offset": 2048, 00:30:28.550 "data_size": 63488 00:30:28.550 }, 00:30:28.550 { 00:30:28.550 "name": "BaseBdev2", 00:30:28.550 "uuid": "1d871a4e-e80a-5f52-8612-a7d79c9678e6", 00:30:28.550 "is_configured": true, 00:30:28.550 "data_offset": 2048, 00:30:28.550 "data_size": 63488 00:30:28.550 }, 00:30:28.550 { 00:30:28.550 "name": "BaseBdev3", 00:30:28.550 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:28.550 "is_configured": true, 00:30:28.550 "data_offset": 2048, 00:30:28.550 "data_size": 63488 00:30:28.550 }, 00:30:28.550 { 00:30:28.550 "name": "BaseBdev4", 00:30:28.550 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:28.550 "is_configured": true, 00:30:28.550 "data_offset": 2048, 00:30:28.550 "data_size": 63488 00:30:28.550 } 00:30:28.550 ] 00:30:28.550 }' 00:30:28.550 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:28.550 16:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:28.809 [2024-11-05 16:00:01.188777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.809 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:29.067 [2024-11-05 16:00:01.260425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:29.067 16:00:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:29.067 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:29.068 "name": "raid_bdev1", 00:30:29.068 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:29.068 
"strip_size_kb": 0, 00:30:29.068 "state": "online", 00:30:29.068 "raid_level": "raid1", 00:30:29.068 "superblock": true, 00:30:29.068 "num_base_bdevs": 4, 00:30:29.068 "num_base_bdevs_discovered": 3, 00:30:29.068 "num_base_bdevs_operational": 3, 00:30:29.068 "base_bdevs_list": [ 00:30:29.068 { 00:30:29.068 "name": null, 00:30:29.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.068 "is_configured": false, 00:30:29.068 "data_offset": 0, 00:30:29.068 "data_size": 63488 00:30:29.068 }, 00:30:29.068 { 00:30:29.068 "name": "BaseBdev2", 00:30:29.068 "uuid": "1d871a4e-e80a-5f52-8612-a7d79c9678e6", 00:30:29.068 "is_configured": true, 00:30:29.068 "data_offset": 2048, 00:30:29.068 "data_size": 63488 00:30:29.068 }, 00:30:29.068 { 00:30:29.068 "name": "BaseBdev3", 00:30:29.068 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:29.068 "is_configured": true, 00:30:29.068 "data_offset": 2048, 00:30:29.068 "data_size": 63488 00:30:29.068 }, 00:30:29.068 { 00:30:29.068 "name": "BaseBdev4", 00:30:29.068 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:29.068 "is_configured": true, 00:30:29.068 "data_offset": 2048, 00:30:29.068 "data_size": 63488 00:30:29.068 } 00:30:29.068 ] 00:30:29.068 }' 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:29.068 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 [2024-11-05 16:00:01.341711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:29.068 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:29.068 Zero copy mechanism will not be used. 00:30:29.068 Running I/O for 60 seconds... 
00:30:29.325 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:29.325 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.325 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:29.325 [2024-11-05 16:00:01.570094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:29.325 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.325 16:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:29.325 [2024-11-05 16:00:01.658993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:29.325 [2024-11-05 16:00:01.660960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:29.583 [2024-11-05 16:00:01.801233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:29.583 [2024-11-05 16:00:01.806928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:29.841 [2024-11-05 16:00:02.042816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:29.841 [2024-11-05 16:00:02.043014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:30.099 [2024-11-05 16:00:02.267814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:30.099 168.00 IOPS, 504.00 MiB/s [2024-11-05T16:00:02.514Z] [2024-11-05 16:00:02.490098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.356 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:30.356 "name": "raid_bdev1", 00:30:30.356 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:30.356 "strip_size_kb": 0, 00:30:30.356 "state": "online", 00:30:30.356 "raid_level": "raid1", 00:30:30.356 "superblock": true, 00:30:30.356 "num_base_bdevs": 4, 00:30:30.356 "num_base_bdevs_discovered": 4, 00:30:30.356 "num_base_bdevs_operational": 4, 00:30:30.356 "process": { 00:30:30.356 "type": "rebuild", 00:30:30.356 "target": "spare", 00:30:30.356 "progress": { 00:30:30.356 "blocks": 10240, 00:30:30.356 "percent": 16 00:30:30.356 } 00:30:30.356 }, 00:30:30.356 "base_bdevs_list": [ 00:30:30.356 { 00:30:30.356 "name": "spare", 00:30:30.356 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:30.356 "is_configured": true, 00:30:30.356 "data_offset": 2048, 00:30:30.356 "data_size": 63488 
00:30:30.356 }, 00:30:30.356 { 00:30:30.357 "name": "BaseBdev2", 00:30:30.357 "uuid": "1d871a4e-e80a-5f52-8612-a7d79c9678e6", 00:30:30.357 "is_configured": true, 00:30:30.357 "data_offset": 2048, 00:30:30.357 "data_size": 63488 00:30:30.357 }, 00:30:30.357 { 00:30:30.357 "name": "BaseBdev3", 00:30:30.357 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:30.357 "is_configured": true, 00:30:30.357 "data_offset": 2048, 00:30:30.357 "data_size": 63488 00:30:30.357 }, 00:30:30.357 { 00:30:30.357 "name": "BaseBdev4", 00:30:30.357 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:30.357 "is_configured": true, 00:30:30.357 "data_offset": 2048, 00:30:30.357 "data_size": 63488 00:30:30.357 } 00:30:30.357 ] 00:30:30.357 }' 00:30:30.357 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:30.357 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:30.357 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:30.357 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:30.357 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:30.357 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.357 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:30.357 [2024-11-05 16:00:02.721678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:30.615 [2024-11-05 16:00:02.823266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:30.615 [2024-11-05 16:00:02.835222] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:30.615 [2024-11-05 
16:00:02.843276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:30.615 [2024-11-05 16:00:02.843313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:30.615 [2024-11-05 16:00:02.843322] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:30.615 [2024-11-05 16:00:02.861511] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.615 16:00:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:30.615 "name": "raid_bdev1", 00:30:30.615 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:30.615 "strip_size_kb": 0, 00:30:30.615 "state": "online", 00:30:30.615 "raid_level": "raid1", 00:30:30.615 "superblock": true, 00:30:30.615 "num_base_bdevs": 4, 00:30:30.615 "num_base_bdevs_discovered": 3, 00:30:30.615 "num_base_bdevs_operational": 3, 00:30:30.615 "base_bdevs_list": [ 00:30:30.615 { 00:30:30.615 "name": null, 00:30:30.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.615 "is_configured": false, 00:30:30.615 "data_offset": 0, 00:30:30.615 "data_size": 63488 00:30:30.615 }, 00:30:30.615 { 00:30:30.615 "name": "BaseBdev2", 00:30:30.615 "uuid": "1d871a4e-e80a-5f52-8612-a7d79c9678e6", 00:30:30.615 "is_configured": true, 00:30:30.615 "data_offset": 2048, 00:30:30.615 "data_size": 63488 00:30:30.615 }, 00:30:30.615 { 00:30:30.615 "name": "BaseBdev3", 00:30:30.615 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:30.615 "is_configured": true, 00:30:30.615 "data_offset": 2048, 00:30:30.615 "data_size": 63488 00:30:30.615 }, 00:30:30.615 { 00:30:30.615 "name": "BaseBdev4", 00:30:30.615 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:30.615 "is_configured": true, 00:30:30.615 "data_offset": 2048, 00:30:30.615 "data_size": 63488 00:30:30.615 } 00:30:30.615 ] 00:30:30.615 }' 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:30.615 16:00:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.873 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:30.874 "name": "raid_bdev1", 00:30:30.874 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:30.874 "strip_size_kb": 0, 00:30:30.874 "state": "online", 00:30:30.874 "raid_level": "raid1", 00:30:30.874 "superblock": true, 00:30:30.874 "num_base_bdevs": 4, 00:30:30.874 "num_base_bdevs_discovered": 3, 00:30:30.874 "num_base_bdevs_operational": 3, 00:30:30.874 "base_bdevs_list": [ 00:30:30.874 { 00:30:30.874 "name": null, 00:30:30.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.874 "is_configured": false, 00:30:30.874 "data_offset": 0, 00:30:30.874 "data_size": 63488 00:30:30.874 }, 00:30:30.874 { 00:30:30.874 "name": "BaseBdev2", 00:30:30.874 "uuid": "1d871a4e-e80a-5f52-8612-a7d79c9678e6", 00:30:30.874 "is_configured": true, 00:30:30.874 "data_offset": 2048, 00:30:30.874 "data_size": 63488 
00:30:30.874 }, 00:30:30.874 { 00:30:30.874 "name": "BaseBdev3", 00:30:30.874 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:30.874 "is_configured": true, 00:30:30.874 "data_offset": 2048, 00:30:30.874 "data_size": 63488 00:30:30.874 }, 00:30:30.874 { 00:30:30.874 "name": "BaseBdev4", 00:30:30.874 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:30.874 "is_configured": true, 00:30:30.874 "data_offset": 2048, 00:30:30.874 "data_size": 63488 00:30:30.874 } 00:30:30.874 ] 00:30:30.874 }' 00:30:30.874 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:30.874 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:30.874 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:31.131 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:31.131 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:31.131 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.131 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:31.131 [2024-11-05 16:00:03.296124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:31.131 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.131 16:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:31.131 [2024-11-05 16:00:03.353885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:31.131 172.00 IOPS, 516.00 MiB/s [2024-11-05T16:00:03.546Z] [2024-11-05 16:00:03.355497] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:31.131 [2024-11-05 16:00:03.482812] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:31.131 [2024-11-05 16:00:03.483761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:31.389 [2024-11-05 16:00:03.686881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:31.389 [2024-11-05 16:00:03.687118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:31.646 [2024-11-05 16:00:03.935285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:31.904 [2024-11-05 16:00:04.155526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:31.905 [2024-11-05 16:00:04.156084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:32.163 131.33 IOPS, 394.00 MiB/s [2024-11-05T16:00:04.578Z] 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:32.163 "name": "raid_bdev1", 00:30:32.163 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:32.163 "strip_size_kb": 0, 00:30:32.163 "state": "online", 00:30:32.163 "raid_level": "raid1", 00:30:32.163 "superblock": true, 00:30:32.163 "num_base_bdevs": 4, 00:30:32.163 "num_base_bdevs_discovered": 4, 00:30:32.163 "num_base_bdevs_operational": 4, 00:30:32.163 "process": { 00:30:32.163 "type": "rebuild", 00:30:32.163 "target": "spare", 00:30:32.163 "progress": { 00:30:32.163 "blocks": 10240, 00:30:32.163 "percent": 16 00:30:32.163 } 00:30:32.163 }, 00:30:32.163 "base_bdevs_list": [ 00:30:32.163 { 00:30:32.163 "name": "spare", 00:30:32.163 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:32.163 "is_configured": true, 00:30:32.163 "data_offset": 2048, 00:30:32.163 "data_size": 63488 00:30:32.163 }, 00:30:32.163 { 00:30:32.163 "name": "BaseBdev2", 00:30:32.163 "uuid": "1d871a4e-e80a-5f52-8612-a7d79c9678e6", 00:30:32.163 "is_configured": true, 00:30:32.163 "data_offset": 2048, 00:30:32.163 "data_size": 63488 00:30:32.163 }, 00:30:32.163 { 00:30:32.163 "name": "BaseBdev3", 00:30:32.163 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:32.163 "is_configured": true, 00:30:32.163 "data_offset": 2048, 00:30:32.163 "data_size": 63488 00:30:32.163 }, 00:30:32.163 { 00:30:32.163 "name": "BaseBdev4", 00:30:32.163 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:32.163 "is_configured": true, 00:30:32.163 "data_offset": 2048, 00:30:32.163 "data_size": 63488 00:30:32.163 } 00:30:32.163 ] 00:30:32.163 }' 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:32.163 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.163 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:32.163 [2024-11-05 16:00:04.434160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:32.163 [2024-11-05 16:00:04.513948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:32.163 [2024-11-05 16:00:04.514880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:32.422 [2024-11-05 16:00:04.721976] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:30:32.422 [2024-11-05 16:00:04.722015] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:30:32.422 [2024-11-05 16:00:04.722620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:32.422 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.422 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:32.422 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:32.422 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.422 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:32.422 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:32.422 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:32.423 "name": "raid_bdev1", 00:30:32.423 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:32.423 "strip_size_kb": 0, 00:30:32.423 "state": "online", 
00:30:32.423 "raid_level": "raid1", 00:30:32.423 "superblock": true, 00:30:32.423 "num_base_bdevs": 4, 00:30:32.423 "num_base_bdevs_discovered": 3, 00:30:32.423 "num_base_bdevs_operational": 3, 00:30:32.423 "process": { 00:30:32.423 "type": "rebuild", 00:30:32.423 "target": "spare", 00:30:32.423 "progress": { 00:30:32.423 "blocks": 14336, 00:30:32.423 "percent": 22 00:30:32.423 } 00:30:32.423 }, 00:30:32.423 "base_bdevs_list": [ 00:30:32.423 { 00:30:32.423 "name": "spare", 00:30:32.423 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:32.423 "is_configured": true, 00:30:32.423 "data_offset": 2048, 00:30:32.423 "data_size": 63488 00:30:32.423 }, 00:30:32.423 { 00:30:32.423 "name": null, 00:30:32.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.423 "is_configured": false, 00:30:32.423 "data_offset": 0, 00:30:32.423 "data_size": 63488 00:30:32.423 }, 00:30:32.423 { 00:30:32.423 "name": "BaseBdev3", 00:30:32.423 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:32.423 "is_configured": true, 00:30:32.423 "data_offset": 2048, 00:30:32.423 "data_size": 63488 00:30:32.423 }, 00:30:32.423 { 00:30:32.423 "name": "BaseBdev4", 00:30:32.423 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:32.423 "is_configured": true, 00:30:32.423 "data_offset": 2048, 00:30:32.423 "data_size": 63488 00:30:32.423 } 00:30:32.423 ] 00:30:32.423 }' 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=380 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.423 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:32.681 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.681 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:32.681 "name": "raid_bdev1", 00:30:32.681 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:32.681 "strip_size_kb": 0, 00:30:32.681 "state": "online", 00:30:32.681 "raid_level": "raid1", 00:30:32.681 "superblock": true, 00:30:32.681 "num_base_bdevs": 4, 00:30:32.681 "num_base_bdevs_discovered": 3, 00:30:32.681 "num_base_bdevs_operational": 3, 00:30:32.681 "process": { 00:30:32.681 "type": "rebuild", 00:30:32.681 "target": "spare", 00:30:32.681 "progress": { 00:30:32.682 "blocks": 14336, 00:30:32.682 "percent": 22 00:30:32.682 } 00:30:32.682 }, 00:30:32.682 "base_bdevs_list": [ 00:30:32.682 { 00:30:32.682 "name": "spare", 00:30:32.682 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 
00:30:32.682 "is_configured": true, 00:30:32.682 "data_offset": 2048, 00:30:32.682 "data_size": 63488 00:30:32.682 }, 00:30:32.682 { 00:30:32.682 "name": null, 00:30:32.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.682 "is_configured": false, 00:30:32.682 "data_offset": 0, 00:30:32.682 "data_size": 63488 00:30:32.682 }, 00:30:32.682 { 00:30:32.682 "name": "BaseBdev3", 00:30:32.682 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:32.682 "is_configured": true, 00:30:32.682 "data_offset": 2048, 00:30:32.682 "data_size": 63488 00:30:32.682 }, 00:30:32.682 { 00:30:32.682 "name": "BaseBdev4", 00:30:32.682 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:32.682 "is_configured": true, 00:30:32.682 "data_offset": 2048, 00:30:32.682 "data_size": 63488 00:30:32.682 } 00:30:32.682 ] 00:30:32.682 }' 00:30:32.682 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:32.682 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:32.682 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:32.682 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.682 16:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:32.682 [2024-11-05 16:00:04.937096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:32.682 [2024-11-05 16:00:04.937481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:33.505 107.25 IOPS, 321.75 MiB/s [2024-11-05T16:00:05.920Z] [2024-11-05 16:00:05.626866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:33.505 [2024-11-05 16:00:05.842426] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:33.764 "name": "raid_bdev1", 00:30:33.764 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:33.764 "strip_size_kb": 0, 00:30:33.764 "state": "online", 00:30:33.764 "raid_level": "raid1", 00:30:33.764 "superblock": true, 00:30:33.764 "num_base_bdevs": 4, 00:30:33.764 "num_base_bdevs_discovered": 3, 00:30:33.764 "num_base_bdevs_operational": 3, 00:30:33.764 "process": { 00:30:33.764 "type": "rebuild", 00:30:33.764 "target": "spare", 00:30:33.764 "progress": { 00:30:33.764 "blocks": 28672, 00:30:33.764 "percent": 45 
00:30:33.764 } 00:30:33.764 }, 00:30:33.764 "base_bdevs_list": [ 00:30:33.764 { 00:30:33.764 "name": "spare", 00:30:33.764 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:33.764 "is_configured": true, 00:30:33.764 "data_offset": 2048, 00:30:33.764 "data_size": 63488 00:30:33.764 }, 00:30:33.764 { 00:30:33.764 "name": null, 00:30:33.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.764 "is_configured": false, 00:30:33.764 "data_offset": 0, 00:30:33.764 "data_size": 63488 00:30:33.764 }, 00:30:33.764 { 00:30:33.764 "name": "BaseBdev3", 00:30:33.764 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:33.764 "is_configured": true, 00:30:33.764 "data_offset": 2048, 00:30:33.764 "data_size": 63488 00:30:33.764 }, 00:30:33.764 { 00:30:33.764 "name": "BaseBdev4", 00:30:33.764 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:33.764 "is_configured": true, 00:30:33.764 "data_offset": 2048, 00:30:33.764 "data_size": 63488 00:30:33.764 } 00:30:33.764 ] 00:30:33.764 }' 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:33.764 16:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:33.764 16:00:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:33.764 16:00:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:34.022 97.60 IOPS, 292.80 MiB/s [2024-11-05T16:00:06.437Z] [2024-11-05 16:00:06.424354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:34.588 [2024-11-05 16:00:06.861481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:34.846 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:34.846 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:34.846 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:34.846 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:34.847 "name": "raid_bdev1", 00:30:34.847 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:34.847 "strip_size_kb": 0, 00:30:34.847 "state": "online", 00:30:34.847 "raid_level": "raid1", 00:30:34.847 "superblock": true, 00:30:34.847 "num_base_bdevs": 4, 00:30:34.847 "num_base_bdevs_discovered": 3, 00:30:34.847 "num_base_bdevs_operational": 3, 00:30:34.847 "process": { 00:30:34.847 "type": "rebuild", 00:30:34.847 "target": "spare", 00:30:34.847 "progress": { 00:30:34.847 "blocks": 49152, 00:30:34.847 "percent": 77 00:30:34.847 } 00:30:34.847 }, 00:30:34.847 "base_bdevs_list": [ 00:30:34.847 { 00:30:34.847 "name": "spare", 00:30:34.847 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 
00:30:34.847 "is_configured": true, 00:30:34.847 "data_offset": 2048, 00:30:34.847 "data_size": 63488 00:30:34.847 }, 00:30:34.847 { 00:30:34.847 "name": null, 00:30:34.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.847 "is_configured": false, 00:30:34.847 "data_offset": 0, 00:30:34.847 "data_size": 63488 00:30:34.847 }, 00:30:34.847 { 00:30:34.847 "name": "BaseBdev3", 00:30:34.847 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:34.847 "is_configured": true, 00:30:34.847 "data_offset": 2048, 00:30:34.847 "data_size": 63488 00:30:34.847 }, 00:30:34.847 { 00:30:34.847 "name": "BaseBdev4", 00:30:34.847 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:34.847 "is_configured": true, 00:30:34.847 "data_offset": 2048, 00:30:34.847 "data_size": 63488 00:30:34.847 } 00:30:34.847 ] 00:30:34.847 }' 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:34.847 [2024-11-05 16:00:07.079235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:34.847 16:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:35.105 90.17 IOPS, 270.50 MiB/s [2024-11-05T16:00:07.520Z] [2024-11-05 16:00:07.490906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:30:35.362 [2024-11-05 16:00:07.592974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:35.620 [2024-11-05 16:00:07.920202] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:35.620 [2024-11-05 16:00:08.020203] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:35.620 [2024-11-05 16:00:08.022128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:35.878 "name": "raid_bdev1", 00:30:35.878 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:35.878 "strip_size_kb": 0, 00:30:35.878 "state": "online", 00:30:35.878 "raid_level": "raid1", 00:30:35.878 "superblock": true, 00:30:35.878 "num_base_bdevs": 4, 00:30:35.878 
"num_base_bdevs_discovered": 3, 00:30:35.878 "num_base_bdevs_operational": 3, 00:30:35.878 "base_bdevs_list": [ 00:30:35.878 { 00:30:35.878 "name": "spare", 00:30:35.878 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:35.878 "is_configured": true, 00:30:35.878 "data_offset": 2048, 00:30:35.878 "data_size": 63488 00:30:35.878 }, 00:30:35.878 { 00:30:35.878 "name": null, 00:30:35.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.878 "is_configured": false, 00:30:35.878 "data_offset": 0, 00:30:35.878 "data_size": 63488 00:30:35.878 }, 00:30:35.878 { 00:30:35.878 "name": "BaseBdev3", 00:30:35.878 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:35.878 "is_configured": true, 00:30:35.878 "data_offset": 2048, 00:30:35.878 "data_size": 63488 00:30:35.878 }, 00:30:35.878 { 00:30:35.878 "name": "BaseBdev4", 00:30:35.878 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:35.878 "is_configured": true, 00:30:35.878 "data_offset": 2048, 00:30:35.878 "data_size": 63488 00:30:35.878 } 00:30:35.878 ] 00:30:35.878 }' 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:35.878 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:35.879 16:00:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:35.879 "name": "raid_bdev1", 00:30:35.879 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:35.879 "strip_size_kb": 0, 00:30:35.879 "state": "online", 00:30:35.879 "raid_level": "raid1", 00:30:35.879 "superblock": true, 00:30:35.879 "num_base_bdevs": 4, 00:30:35.879 "num_base_bdevs_discovered": 3, 00:30:35.879 "num_base_bdevs_operational": 3, 00:30:35.879 "base_bdevs_list": [ 00:30:35.879 { 00:30:35.879 "name": "spare", 00:30:35.879 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:35.879 "is_configured": true, 00:30:35.879 "data_offset": 2048, 00:30:35.879 "data_size": 63488 00:30:35.879 }, 00:30:35.879 { 00:30:35.879 "name": null, 00:30:35.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.879 "is_configured": false, 00:30:35.879 "data_offset": 0, 00:30:35.879 "data_size": 63488 00:30:35.879 }, 00:30:35.879 { 00:30:35.879 "name": "BaseBdev3", 00:30:35.879 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:35.879 "is_configured": true, 00:30:35.879 "data_offset": 2048, 00:30:35.879 "data_size": 63488 00:30:35.879 }, 00:30:35.879 { 00:30:35.879 "name": "BaseBdev4", 00:30:35.879 "uuid": 
"89274539-64b6-5b25-b614-42b544075397", 00:30:35.879 "is_configured": true, 00:30:35.879 "data_offset": 2048, 00:30:35.879 "data_size": 63488 00:30:35.879 } 00:30:35.879 ] 00:30:35.879 }' 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:35.879 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.137 "name": "raid_bdev1", 00:30:36.137 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:36.137 "strip_size_kb": 0, 00:30:36.137 "state": "online", 00:30:36.137 "raid_level": "raid1", 00:30:36.137 "superblock": true, 00:30:36.137 "num_base_bdevs": 4, 00:30:36.137 "num_base_bdevs_discovered": 3, 00:30:36.137 "num_base_bdevs_operational": 3, 00:30:36.137 "base_bdevs_list": [ 00:30:36.137 { 00:30:36.137 "name": "spare", 00:30:36.137 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:36.137 "is_configured": true, 00:30:36.137 "data_offset": 2048, 00:30:36.137 "data_size": 63488 00:30:36.137 }, 00:30:36.137 { 00:30:36.137 "name": null, 00:30:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.137 "is_configured": false, 00:30:36.137 "data_offset": 0, 00:30:36.137 "data_size": 63488 00:30:36.137 }, 00:30:36.137 { 00:30:36.137 "name": "BaseBdev3", 00:30:36.137 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:36.137 "is_configured": true, 00:30:36.137 "data_offset": 2048, 00:30:36.137 "data_size": 63488 00:30:36.137 }, 00:30:36.137 { 00:30:36.137 "name": "BaseBdev4", 00:30:36.137 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:36.137 "is_configured": true, 00:30:36.137 "data_offset": 2048, 00:30:36.137 "data_size": 63488 00:30:36.137 } 00:30:36.137 ] 00:30:36.137 }' 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.137 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.396 83.14 IOPS, 
249.43 MiB/s [2024-11-05T16:00:08.811Z] 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.396 [2024-11-05 16:00:08.609599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:36.396 [2024-11-05 16:00:08.609626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:36.396 00:30:36.396 Latency(us) 00:30:36.396 [2024-11-05T16:00:08.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.396 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:36.396 raid_bdev1 : 7.32 81.54 244.61 0.00 0.00 16885.59 274.12 116149.96 00:30:36.396 [2024-11-05T16:00:08.811Z] =================================================================================================================== 00:30:36.396 [2024-11-05T16:00:08.811Z] Total : 81.54 244.61 0.00 0.00 16885.59 274.12 116149.96 00:30:36.396 [2024-11-05 16:00:08.676935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.396 [2024-11-05 16:00:08.676972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:36.396 [2024-11-05 16:00:08.677053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:36.396 [2024-11-05 16:00:08.677063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:36.396 { 00:30:36.396 "results": [ 00:30:36.396 { 00:30:36.396 "job": "raid_bdev1", 00:30:36.396 "core_mask": "0x1", 00:30:36.396 "workload": "randrw", 00:30:36.396 "percentage": 50, 00:30:36.396 "status": "finished", 00:30:36.396 "queue_depth": 2, 00:30:36.396 
"io_size": 3145728, 00:30:36.396 "runtime": 7.321792, 00:30:36.396 "iops": 81.53741597685375, 00:30:36.396 "mibps": 244.61224793056124, 00:30:36.396 "io_failed": 0, 00:30:36.396 "io_timeout": 0, 00:30:36.396 "avg_latency_us": 16885.585836876693, 00:30:36.396 "min_latency_us": 274.11692307692306, 00:30:36.396 "max_latency_us": 116149.95692307693 00:30:36.396 } 00:30:36.396 ], 00:30:36.396 "core_count": 1 00:30:36.396 } 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:36.396 16:00:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:36.396 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:30:36.654 /dev/nbd0 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:36.654 1+0 records in 00:30:36.654 1+0 records out 00:30:36.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268468 s, 15.3 MB/s 00:30:36.654 16:00:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:36.654 16:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:36.912 /dev/nbd1 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:36.912 1+0 records in 00:30:36.912 1+0 records out 00:30:36.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281775 s, 14.5 MB/s 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:36.912 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.170 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:37.428 /dev/nbd1 00:30:37.428 16:00:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:37.428 1+0 records in 00:30:37.428 1+0 records out 00:30:37.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306889 s, 13.3 MB/s 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 
00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:37.428 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:37.686 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:37.686 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:37.686 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:37.686 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:37.686 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:37.687 16:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:37.687 
16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:37.687 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.945 [2024-11-05 16:00:10.229404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:37.945 [2024-11-05 16:00:10.229449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:37.945 [2024-11-05 16:00:10.229466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:37.945 [2024-11-05 16:00:10.229480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:37.945 [2024-11-05 16:00:10.231309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:37.945 [2024-11-05 16:00:10.231339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:37.945 [2024-11-05 16:00:10.231407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:37.945 [2024-11-05 16:00:10.231448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:37.945 [2024-11-05 16:00:10.231551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:37.945 [2024-11-05 16:00:10.231636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:37.945 spare 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.945 [2024-11-05 16:00:10.331706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:37.945 [2024-11-05 16:00:10.331739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:37.945 [2024-11-05 16:00:10.331999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:30:37.945 [2024-11-05 16:00:10.332135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:37.945 [2024-11-05 16:00:10.332149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:37.945 [2024-11-05 16:00:10.332285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.945 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:37.946 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.204 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.204 "name": "raid_bdev1", 00:30:38.204 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:38.204 "strip_size_kb": 0, 00:30:38.204 "state": "online", 00:30:38.204 "raid_level": "raid1", 00:30:38.204 "superblock": true, 00:30:38.204 "num_base_bdevs": 4, 00:30:38.204 "num_base_bdevs_discovered": 3, 00:30:38.204 "num_base_bdevs_operational": 3, 00:30:38.204 "base_bdevs_list": [ 00:30:38.204 { 00:30:38.204 "name": "spare", 00:30:38.204 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:38.204 "is_configured": true, 00:30:38.204 "data_offset": 2048, 00:30:38.204 "data_size": 63488 00:30:38.204 }, 00:30:38.204 { 00:30:38.204 "name": null, 00:30:38.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.204 "is_configured": false, 00:30:38.204 "data_offset": 2048, 00:30:38.204 "data_size": 63488 00:30:38.204 }, 00:30:38.204 { 00:30:38.204 "name": "BaseBdev3", 00:30:38.204 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:38.204 "is_configured": true, 00:30:38.204 "data_offset": 2048, 00:30:38.204 "data_size": 63488 00:30:38.204 }, 
00:30:38.204 { 00:30:38.204 "name": "BaseBdev4", 00:30:38.204 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:38.204 "is_configured": true, 00:30:38.204 "data_offset": 2048, 00:30:38.204 "data_size": 63488 00:30:38.204 } 00:30:38.204 ] 00:30:38.204 }' 00:30:38.204 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.204 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:38.462 "name": "raid_bdev1", 00:30:38.462 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:38.462 "strip_size_kb": 0, 00:30:38.462 "state": "online", 00:30:38.462 "raid_level": "raid1", 00:30:38.462 "superblock": true, 00:30:38.462 "num_base_bdevs": 4, 00:30:38.462 
"num_base_bdevs_discovered": 3, 00:30:38.462 "num_base_bdevs_operational": 3, 00:30:38.462 "base_bdevs_list": [ 00:30:38.462 { 00:30:38.462 "name": "spare", 00:30:38.462 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:38.462 "is_configured": true, 00:30:38.462 "data_offset": 2048, 00:30:38.462 "data_size": 63488 00:30:38.462 }, 00:30:38.462 { 00:30:38.462 "name": null, 00:30:38.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.462 "is_configured": false, 00:30:38.462 "data_offset": 2048, 00:30:38.462 "data_size": 63488 00:30:38.462 }, 00:30:38.462 { 00:30:38.462 "name": "BaseBdev3", 00:30:38.462 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:38.462 "is_configured": true, 00:30:38.462 "data_offset": 2048, 00:30:38.462 "data_size": 63488 00:30:38.462 }, 00:30:38.462 { 00:30:38.462 "name": "BaseBdev4", 00:30:38.462 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:38.462 "is_configured": true, 00:30:38.462 "data_offset": 2048, 00:30:38.462 "data_size": 63488 00:30:38.462 } 00:30:38.462 ] 00:30:38.462 }' 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:38.462 16:00:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.462 [2024-11-05 16:00:10.785602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:38.462 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.463 "name": "raid_bdev1", 00:30:38.463 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:38.463 "strip_size_kb": 0, 00:30:38.463 "state": "online", 00:30:38.463 "raid_level": "raid1", 00:30:38.463 "superblock": true, 00:30:38.463 "num_base_bdevs": 4, 00:30:38.463 "num_base_bdevs_discovered": 2, 00:30:38.463 "num_base_bdevs_operational": 2, 00:30:38.463 "base_bdevs_list": [ 00:30:38.463 { 00:30:38.463 "name": null, 00:30:38.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.463 "is_configured": false, 00:30:38.463 "data_offset": 0, 00:30:38.463 "data_size": 63488 00:30:38.463 }, 00:30:38.463 { 00:30:38.463 "name": null, 00:30:38.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.463 "is_configured": false, 00:30:38.463 "data_offset": 2048, 00:30:38.463 "data_size": 63488 00:30:38.463 }, 00:30:38.463 { 00:30:38.463 "name": "BaseBdev3", 00:30:38.463 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:38.463 "is_configured": true, 00:30:38.463 "data_offset": 2048, 00:30:38.463 "data_size": 63488 00:30:38.463 }, 00:30:38.463 { 00:30:38.463 "name": "BaseBdev4", 00:30:38.463 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:38.463 "is_configured": true, 00:30:38.463 "data_offset": 2048, 00:30:38.463 "data_size": 63488 00:30:38.463 } 00:30:38.463 ] 00:30:38.463 }' 00:30:38.463 16:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.463 16:00:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.720 16:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:38.720 16:00:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.720 16:00:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.720 [2024-11-05 16:00:11.109709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:38.720 [2024-11-05 16:00:11.109860] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:38.720 [2024-11-05 16:00:11.109873] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:38.720 [2024-11-05 16:00:11.109902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:38.720 [2024-11-05 16:00:11.117409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:30:38.720 16:00:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.720 16:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:38.720 [2024-11-05 16:00:11.118983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:40.094 "name": "raid_bdev1", 00:30:40.094 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:40.094 "strip_size_kb": 0, 00:30:40.094 "state": "online", 00:30:40.094 "raid_level": "raid1", 00:30:40.094 "superblock": true, 00:30:40.094 "num_base_bdevs": 4, 00:30:40.094 "num_base_bdevs_discovered": 3, 00:30:40.094 "num_base_bdevs_operational": 3, 00:30:40.094 "process": { 00:30:40.094 "type": "rebuild", 00:30:40.094 "target": "spare", 00:30:40.094 "progress": { 00:30:40.094 "blocks": 20480, 00:30:40.094 "percent": 32 00:30:40.094 } 00:30:40.094 }, 00:30:40.094 "base_bdevs_list": [ 00:30:40.094 { 00:30:40.094 "name": "spare", 00:30:40.094 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:40.094 "is_configured": true, 00:30:40.094 "data_offset": 2048, 00:30:40.094 "data_size": 63488 00:30:40.094 }, 00:30:40.094 { 00:30:40.094 "name": null, 00:30:40.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.094 "is_configured": false, 00:30:40.094 "data_offset": 2048, 00:30:40.094 "data_size": 63488 00:30:40.094 }, 00:30:40.094 { 00:30:40.094 "name": "BaseBdev3", 00:30:40.094 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:40.094 "is_configured": true, 00:30:40.094 "data_offset": 2048, 00:30:40.094 "data_size": 63488 00:30:40.094 }, 00:30:40.094 { 
00:30:40.094 "name": "BaseBdev4", 00:30:40.094 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:40.094 "is_configured": true, 00:30:40.094 "data_offset": 2048, 00:30:40.094 "data_size": 63488 00:30:40.094 } 00:30:40.094 ] 00:30:40.094 }' 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.094 [2024-11-05 16:00:12.221191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:40.094 [2024-11-05 16:00:12.223691] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:40.094 [2024-11-05 16:00:12.223742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:40.094 [2024-11-05 16:00:12.223755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:40.094 [2024-11-05 16:00:12.223762] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.094 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.094 "name": "raid_bdev1", 00:30:40.094 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:40.094 "strip_size_kb": 0, 00:30:40.094 "state": "online", 00:30:40.094 "raid_level": "raid1", 00:30:40.094 "superblock": true, 00:30:40.094 "num_base_bdevs": 4, 00:30:40.094 "num_base_bdevs_discovered": 2, 00:30:40.094 "num_base_bdevs_operational": 2, 00:30:40.094 "base_bdevs_list": [ 00:30:40.094 { 00:30:40.094 
"name": null, 00:30:40.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.094 "is_configured": false, 00:30:40.094 "data_offset": 0, 00:30:40.094 "data_size": 63488 00:30:40.094 }, 00:30:40.094 { 00:30:40.094 "name": null, 00:30:40.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.094 "is_configured": false, 00:30:40.094 "data_offset": 2048, 00:30:40.094 "data_size": 63488 00:30:40.094 }, 00:30:40.094 { 00:30:40.094 "name": "BaseBdev3", 00:30:40.094 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:40.094 "is_configured": true, 00:30:40.094 "data_offset": 2048, 00:30:40.094 "data_size": 63488 00:30:40.094 }, 00:30:40.094 { 00:30:40.094 "name": "BaseBdev4", 00:30:40.094 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:40.094 "is_configured": true, 00:30:40.094 "data_offset": 2048, 00:30:40.094 "data_size": 63488 00:30:40.094 } 00:30:40.095 ] 00:30:40.095 }' 00:30:40.095 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.095 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.352 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:40.352 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.352 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.352 [2024-11-05 16:00:12.536430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:40.352 [2024-11-05 16:00:12.536479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:40.352 [2024-11-05 16:00:12.536497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:30:40.352 [2024-11-05 16:00:12.536505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:40.352 [2024-11-05 16:00:12.536873] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:40.352 [2024-11-05 16:00:12.536895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:40.352 [2024-11-05 16:00:12.536967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:40.352 [2024-11-05 16:00:12.536979] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:40.352 [2024-11-05 16:00:12.536986] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:40.352 [2024-11-05 16:00:12.537006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:40.352 [2024-11-05 16:00:12.544602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:30:40.352 spare 00:30:40.352 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.352 16:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:40.352 [2024-11-05 16:00:12.546093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:41.286 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:41.286 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:41.286 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:41.286 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:41.286 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:41.286 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.286 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:41.287 "name": "raid_bdev1", 00:30:41.287 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:41.287 "strip_size_kb": 0, 00:30:41.287 "state": "online", 00:30:41.287 "raid_level": "raid1", 00:30:41.287 "superblock": true, 00:30:41.287 "num_base_bdevs": 4, 00:30:41.287 "num_base_bdevs_discovered": 3, 00:30:41.287 "num_base_bdevs_operational": 3, 00:30:41.287 "process": { 00:30:41.287 "type": "rebuild", 00:30:41.287 "target": "spare", 00:30:41.287 "progress": { 00:30:41.287 "blocks": 20480, 00:30:41.287 "percent": 32 00:30:41.287 } 00:30:41.287 }, 00:30:41.287 "base_bdevs_list": [ 00:30:41.287 { 00:30:41.287 "name": "spare", 00:30:41.287 "uuid": "22e006d1-0ef4-50ea-a356-05be83e4126d", 00:30:41.287 "is_configured": true, 00:30:41.287 "data_offset": 2048, 00:30:41.287 "data_size": 63488 00:30:41.287 }, 00:30:41.287 { 00:30:41.287 "name": null, 00:30:41.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.287 "is_configured": false, 00:30:41.287 "data_offset": 2048, 00:30:41.287 "data_size": 63488 00:30:41.287 }, 00:30:41.287 { 00:30:41.287 "name": "BaseBdev3", 00:30:41.287 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:41.287 "is_configured": true, 00:30:41.287 "data_offset": 2048, 00:30:41.287 "data_size": 63488 00:30:41.287 }, 00:30:41.287 { 00:30:41.287 "name": "BaseBdev4", 00:30:41.287 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:41.287 "is_configured": true, 00:30:41.287 "data_offset": 2048, 00:30:41.287 "data_size": 63488 00:30:41.287 } 00:30:41.287 
] 00:30:41.287 }' 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.287 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.287 [2024-11-05 16:00:13.652654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:41.545 [2024-11-05 16:00:13.751203] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:41.545 [2024-11-05 16:00:13.751259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.545 [2024-11-05 16:00:13.751273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:41.545 [2024-11-05 16:00:13.751280] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.545 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.545 "name": "raid_bdev1", 00:30:41.545 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:41.545 "strip_size_kb": 0, 00:30:41.545 "state": "online", 00:30:41.545 "raid_level": "raid1", 00:30:41.545 "superblock": true, 00:30:41.545 "num_base_bdevs": 4, 00:30:41.546 "num_base_bdevs_discovered": 2, 00:30:41.546 "num_base_bdevs_operational": 2, 00:30:41.546 "base_bdevs_list": [ 00:30:41.546 { 00:30:41.546 "name": null, 00:30:41.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.546 "is_configured": false, 00:30:41.546 "data_offset": 0, 00:30:41.546 "data_size": 63488 00:30:41.546 }, 00:30:41.546 { 
00:30:41.546 "name": null, 00:30:41.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.546 "is_configured": false, 00:30:41.546 "data_offset": 2048, 00:30:41.546 "data_size": 63488 00:30:41.546 }, 00:30:41.546 { 00:30:41.546 "name": "BaseBdev3", 00:30:41.546 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:41.546 "is_configured": true, 00:30:41.546 "data_offset": 2048, 00:30:41.546 "data_size": 63488 00:30:41.546 }, 00:30:41.546 { 00:30:41.546 "name": "BaseBdev4", 00:30:41.546 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:41.546 "is_configured": true, 00:30:41.546 "data_offset": 2048, 00:30:41.546 "data_size": 63488 00:30:41.546 } 00:30:41.546 ] 00:30:41.546 }' 00:30:41.546 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.546 16:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:41.804 "name": "raid_bdev1", 00:30:41.804 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:41.804 "strip_size_kb": 0, 00:30:41.804 "state": "online", 00:30:41.804 "raid_level": "raid1", 00:30:41.804 "superblock": true, 00:30:41.804 "num_base_bdevs": 4, 00:30:41.804 "num_base_bdevs_discovered": 2, 00:30:41.804 "num_base_bdevs_operational": 2, 00:30:41.804 "base_bdevs_list": [ 00:30:41.804 { 00:30:41.804 "name": null, 00:30:41.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.804 "is_configured": false, 00:30:41.804 "data_offset": 0, 00:30:41.804 "data_size": 63488 00:30:41.804 }, 00:30:41.804 { 00:30:41.804 "name": null, 00:30:41.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.804 "is_configured": false, 00:30:41.804 "data_offset": 2048, 00:30:41.804 "data_size": 63488 00:30:41.804 }, 00:30:41.804 { 00:30:41.804 "name": "BaseBdev3", 00:30:41.804 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:41.804 "is_configured": true, 00:30:41.804 "data_offset": 2048, 00:30:41.804 "data_size": 63488 00:30:41.804 }, 00:30:41.804 { 00:30:41.804 "name": "BaseBdev4", 00:30:41.804 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:41.804 "is_configured": true, 00:30:41.804 "data_offset": 2048, 00:30:41.804 "data_size": 63488 00:30:41.804 } 00:30:41.804 ] 00:30:41.804 }' 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.804 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.804 [2024-11-05 16:00:14.172526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:41.805 [2024-11-05 16:00:14.172568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:41.805 [2024-11-05 16:00:14.172584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:30:41.805 [2024-11-05 16:00:14.172592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:41.805 [2024-11-05 16:00:14.172933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:41.805 [2024-11-05 16:00:14.172945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:41.805 [2024-11-05 16:00:14.173003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:41.805 [2024-11-05 16:00:14.173017] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:41.805 [2024-11-05 16:00:14.173025] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:41.805 [2024-11-05 16:00:14.173032] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:41.805 BaseBdev1 00:30:41.805 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.805 16:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:43.178 "name": "raid_bdev1", 00:30:43.178 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:43.178 "strip_size_kb": 0, 00:30:43.178 "state": "online", 00:30:43.178 "raid_level": "raid1", 00:30:43.178 "superblock": true, 00:30:43.178 "num_base_bdevs": 4, 00:30:43.178 "num_base_bdevs_discovered": 2, 00:30:43.178 "num_base_bdevs_operational": 2, 00:30:43.178 "base_bdevs_list": [ 00:30:43.178 { 00:30:43.178 "name": null, 00:30:43.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.178 "is_configured": false, 00:30:43.178 "data_offset": 0, 00:30:43.178 "data_size": 63488 00:30:43.178 }, 00:30:43.178 { 00:30:43.178 "name": null, 00:30:43.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.178 "is_configured": false, 00:30:43.178 "data_offset": 2048, 00:30:43.178 "data_size": 63488 00:30:43.178 }, 00:30:43.178 { 00:30:43.178 "name": "BaseBdev3", 00:30:43.178 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:43.178 "is_configured": true, 00:30:43.178 "data_offset": 2048, 00:30:43.178 "data_size": 63488 00:30:43.178 }, 00:30:43.178 { 00:30:43.178 "name": "BaseBdev4", 00:30:43.178 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:43.178 "is_configured": true, 00:30:43.178 "data_offset": 2048, 00:30:43.178 "data_size": 63488 00:30:43.178 } 00:30:43.178 ] 00:30:43.178 }' 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:43.178 "name": "raid_bdev1", 00:30:43.178 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:43.178 "strip_size_kb": 0, 00:30:43.178 "state": "online", 00:30:43.178 "raid_level": "raid1", 00:30:43.178 "superblock": true, 00:30:43.178 "num_base_bdevs": 4, 00:30:43.178 "num_base_bdevs_discovered": 2, 00:30:43.178 "num_base_bdevs_operational": 2, 00:30:43.178 "base_bdevs_list": [ 00:30:43.178 { 00:30:43.178 "name": null, 00:30:43.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.178 "is_configured": false, 00:30:43.178 "data_offset": 0, 00:30:43.178 "data_size": 63488 00:30:43.178 }, 00:30:43.178 { 00:30:43.178 "name": null, 00:30:43.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.178 "is_configured": false, 00:30:43.178 "data_offset": 2048, 00:30:43.178 "data_size": 63488 00:30:43.178 }, 00:30:43.178 { 00:30:43.178 "name": "BaseBdev3", 00:30:43.178 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:43.178 "is_configured": true, 00:30:43.178 "data_offset": 2048, 00:30:43.178 "data_size": 63488 00:30:43.178 }, 00:30:43.178 { 00:30:43.178 
"name": "BaseBdev4", 00:30:43.178 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:43.178 "is_configured": true, 00:30:43.178 "data_offset": 2048, 00:30:43.178 "data_size": 63488 00:30:43.178 } 00:30:43.178 ] 00:30:43.178 }' 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:43.178 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.179 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.179 [2024-11-05 16:00:15.592979] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:43.179 [2024-11-05 16:00:15.593115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:43.179 [2024-11-05 16:00:15.593139] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:43.436 request: 00:30:43.436 { 00:30:43.436 "base_bdev": "BaseBdev1", 00:30:43.436 "raid_bdev": "raid_bdev1", 00:30:43.436 "method": "bdev_raid_add_base_bdev", 00:30:43.436 "req_id": 1 00:30:43.436 } 00:30:43.436 Got JSON-RPC error response 00:30:43.436 response: 00:30:43.436 { 00:30:43.436 "code": -22, 00:30:43.436 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:43.436 } 00:30:43.436 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:43.436 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:30:43.436 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:43.436 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:43.436 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:43.436 16:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.371 "name": "raid_bdev1", 00:30:44.371 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:44.371 "strip_size_kb": 0, 00:30:44.371 "state": "online", 00:30:44.371 "raid_level": "raid1", 00:30:44.371 "superblock": true, 00:30:44.371 "num_base_bdevs": 4, 00:30:44.371 "num_base_bdevs_discovered": 2, 00:30:44.371 "num_base_bdevs_operational": 2, 00:30:44.371 "base_bdevs_list": [ 00:30:44.371 { 00:30:44.371 "name": null, 00:30:44.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.371 "is_configured": false, 00:30:44.371 "data_offset": 0, 00:30:44.371 "data_size": 63488 00:30:44.371 }, 00:30:44.371 { 00:30:44.371 "name": null, 00:30:44.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.371 "is_configured": false, 
00:30:44.371 "data_offset": 2048, 00:30:44.371 "data_size": 63488 00:30:44.371 }, 00:30:44.371 { 00:30:44.371 "name": "BaseBdev3", 00:30:44.371 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:44.371 "is_configured": true, 00:30:44.371 "data_offset": 2048, 00:30:44.371 "data_size": 63488 00:30:44.371 }, 00:30:44.371 { 00:30:44.371 "name": "BaseBdev4", 00:30:44.371 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:44.371 "is_configured": true, 00:30:44.371 "data_offset": 2048, 00:30:44.371 "data_size": 63488 00:30:44.371 } 00:30:44.371 ] 00:30:44.371 }' 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.371 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:30:44.630 "name": "raid_bdev1", 00:30:44.630 "uuid": "e20fb631-da1a-4031-92d2-3072f7be3781", 00:30:44.630 "strip_size_kb": 0, 00:30:44.630 "state": "online", 00:30:44.630 "raid_level": "raid1", 00:30:44.630 "superblock": true, 00:30:44.630 "num_base_bdevs": 4, 00:30:44.630 "num_base_bdevs_discovered": 2, 00:30:44.630 "num_base_bdevs_operational": 2, 00:30:44.630 "base_bdevs_list": [ 00:30:44.630 { 00:30:44.630 "name": null, 00:30:44.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.630 "is_configured": false, 00:30:44.630 "data_offset": 0, 00:30:44.630 "data_size": 63488 00:30:44.630 }, 00:30:44.630 { 00:30:44.630 "name": null, 00:30:44.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.630 "is_configured": false, 00:30:44.630 "data_offset": 2048, 00:30:44.630 "data_size": 63488 00:30:44.630 }, 00:30:44.630 { 00:30:44.630 "name": "BaseBdev3", 00:30:44.630 "uuid": "16fb4e84-c89d-5aa9-b0b9-74cba96863c5", 00:30:44.630 "is_configured": true, 00:30:44.630 "data_offset": 2048, 00:30:44.630 "data_size": 63488 00:30:44.630 }, 00:30:44.630 { 00:30:44.630 "name": "BaseBdev4", 00:30:44.630 "uuid": "89274539-64b6-5b25-b614-42b544075397", 00:30:44.630 "is_configured": true, 00:30:44.630 "data_offset": 2048, 00:30:44.630 "data_size": 63488 00:30:44.630 } 00:30:44.630 ] 00:30:44.630 }' 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76719 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 
76719 ']' 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 76719 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76719 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:44.630 killing process with pid 76719 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76719' 00:30:44.630 Received shutdown signal, test time was about 15.654025 seconds 00:30:44.630 00:30:44.630 Latency(us) 00:30:44.630 [2024-11-05T16:00:17.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.630 [2024-11-05T16:00:17.045Z] =================================================================================================================== 00:30:44.630 [2024-11-05T16:00:17.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 76719 00:30:44.630 [2024-11-05 16:00:16.997810] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:44.630 [2024-11-05 16:00:16.997918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:44.630 16:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 76719 00:30:44.630 [2024-11-05 16:00:16.997977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:44.630 [2024-11-05 16:00:16.997987] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:44.888 [2024-11-05 16:00:17.198888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:45.453 16:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:30:45.453 00:30:45.453 real 0m18.028s 00:30:45.453 user 0m22.868s 00:30:45.453 sys 0m1.752s 00:30:45.453 16:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:45.453 16:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:45.453 ************************************ 00:30:45.453 END TEST raid_rebuild_test_sb_io 00:30:45.453 ************************************ 00:30:45.453 16:00:17 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:30:45.453 16:00:17 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:30:45.453 16:00:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:30:45.453 16:00:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:45.453 16:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:45.453 ************************************ 00:30:45.453 START TEST raid5f_state_function_test 00:30:45.453 ************************************ 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77418 00:30:45.453 Process raid pid: 77418 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77418' 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77418 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 77418 ']' 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.453 16:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:45.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.454 16:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.454 16:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:45.454 16:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:45.454 16:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.712 [2024-11-05 16:00:17.887593] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:30:45.712 [2024-11-05 16:00:17.887708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.712 [2024-11-05 16:00:18.044442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.712 [2024-11-05 16:00:18.123656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.970 [2024-11-05 16:00:18.231012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:45.970 [2024-11-05 16:00:18.231041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.536 [2024-11-05 16:00:18.719184] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:46.536 [2024-11-05 16:00:18.719220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:46.536 [2024-11-05 16:00:18.719228] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:46.536 [2024-11-05 16:00:18.719236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:46.536 [2024-11-05 16:00:18.719244] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:30:46.536 [2024-11-05 16:00:18.719252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:46.536 "name": "Existed_Raid", 00:30:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.536 "strip_size_kb": 64, 00:30:46.536 "state": "configuring", 00:30:46.536 "raid_level": "raid5f", 00:30:46.536 "superblock": false, 00:30:46.536 "num_base_bdevs": 3, 00:30:46.536 "num_base_bdevs_discovered": 0, 00:30:46.536 "num_base_bdevs_operational": 3, 00:30:46.536 "base_bdevs_list": [ 00:30:46.536 { 00:30:46.536 "name": "BaseBdev1", 00:30:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.536 "is_configured": false, 00:30:46.536 "data_offset": 0, 00:30:46.536 "data_size": 0 00:30:46.536 }, 00:30:46.536 { 00:30:46.536 "name": "BaseBdev2", 00:30:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.536 "is_configured": false, 00:30:46.536 "data_offset": 0, 00:30:46.536 "data_size": 0 00:30:46.536 }, 00:30:46.536 { 00:30:46.536 "name": "BaseBdev3", 00:30:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.536 "is_configured": false, 00:30:46.536 "data_offset": 0, 00:30:46.536 "data_size": 0 00:30:46.536 } 00:30:46.536 ] 00:30:46.536 }' 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:46.536 16:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.797 [2024-11-05 16:00:19.047217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:46.797 [2024-11-05 16:00:19.047247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.797 [2024-11-05 16:00:19.055215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:46.797 [2024-11-05 16:00:19.055248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:46.797 [2024-11-05 16:00:19.055255] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:46.797 [2024-11-05 16:00:19.055261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:46.797 [2024-11-05 16:00:19.055266] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:46.797 [2024-11-05 16:00:19.055273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.797 [2024-11-05 16:00:19.082604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:46.797 BaseBdev1 00:30:46.797 16:00:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.797 [ 00:30:46.797 { 00:30:46.797 "name": "BaseBdev1", 00:30:46.797 "aliases": [ 00:30:46.797 "8215a6fa-eb23-4415-8637-38cabcd322ab" 00:30:46.797 ], 00:30:46.797 "product_name": "Malloc disk", 00:30:46.797 "block_size": 512, 00:30:46.797 "num_blocks": 65536, 00:30:46.797 "uuid": "8215a6fa-eb23-4415-8637-38cabcd322ab", 00:30:46.797 "assigned_rate_limits": { 00:30:46.797 "rw_ios_per_sec": 0, 00:30:46.797 
"rw_mbytes_per_sec": 0, 00:30:46.797 "r_mbytes_per_sec": 0, 00:30:46.797 "w_mbytes_per_sec": 0 00:30:46.797 }, 00:30:46.797 "claimed": true, 00:30:46.797 "claim_type": "exclusive_write", 00:30:46.797 "zoned": false, 00:30:46.797 "supported_io_types": { 00:30:46.797 "read": true, 00:30:46.797 "write": true, 00:30:46.797 "unmap": true, 00:30:46.797 "flush": true, 00:30:46.797 "reset": true, 00:30:46.797 "nvme_admin": false, 00:30:46.797 "nvme_io": false, 00:30:46.797 "nvme_io_md": false, 00:30:46.797 "write_zeroes": true, 00:30:46.797 "zcopy": true, 00:30:46.797 "get_zone_info": false, 00:30:46.797 "zone_management": false, 00:30:46.797 "zone_append": false, 00:30:46.797 "compare": false, 00:30:46.797 "compare_and_write": false, 00:30:46.797 "abort": true, 00:30:46.797 "seek_hole": false, 00:30:46.797 "seek_data": false, 00:30:46.797 "copy": true, 00:30:46.797 "nvme_iov_md": false 00:30:46.797 }, 00:30:46.797 "memory_domains": [ 00:30:46.797 { 00:30:46.797 "dma_device_id": "system", 00:30:46.797 "dma_device_type": 1 00:30:46.797 }, 00:30:46.797 { 00:30:46.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:46.797 "dma_device_type": 2 00:30:46.797 } 00:30:46.797 ], 00:30:46.797 "driver_specific": {} 00:30:46.797 } 00:30:46.797 ] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:46.797 16:00:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:46.797 "name": "Existed_Raid", 00:30:46.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.797 "strip_size_kb": 64, 00:30:46.797 "state": "configuring", 00:30:46.797 "raid_level": "raid5f", 00:30:46.797 "superblock": false, 00:30:46.797 "num_base_bdevs": 3, 00:30:46.797 "num_base_bdevs_discovered": 1, 00:30:46.797 "num_base_bdevs_operational": 3, 00:30:46.797 "base_bdevs_list": [ 00:30:46.797 { 00:30:46.797 "name": "BaseBdev1", 00:30:46.797 "uuid": "8215a6fa-eb23-4415-8637-38cabcd322ab", 00:30:46.797 "is_configured": true, 00:30:46.797 "data_offset": 0, 00:30:46.797 "data_size": 65536 00:30:46.797 }, 00:30:46.797 { 00:30:46.797 "name": 
"BaseBdev2", 00:30:46.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.797 "is_configured": false, 00:30:46.797 "data_offset": 0, 00:30:46.797 "data_size": 0 00:30:46.797 }, 00:30:46.797 { 00:30:46.797 "name": "BaseBdev3", 00:30:46.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.797 "is_configured": false, 00:30:46.797 "data_offset": 0, 00:30:46.797 "data_size": 0 00:30:46.797 } 00:30:46.797 ] 00:30:46.797 }' 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:46.797 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.062 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:47.062 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.062 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.062 [2024-11-05 16:00:19.402674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:47.063 [2024-11-05 16:00:19.402707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.063 [2024-11-05 16:00:19.410719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:47.063 [2024-11-05 16:00:19.412155] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:30:47.063 [2024-11-05 16:00:19.412184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:47.063 [2024-11-05 16:00:19.412191] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:47.063 [2024-11-05 16:00:19.412198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.063 "name": "Existed_Raid", 00:30:47.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.063 "strip_size_kb": 64, 00:30:47.063 "state": "configuring", 00:30:47.063 "raid_level": "raid5f", 00:30:47.063 "superblock": false, 00:30:47.063 "num_base_bdevs": 3, 00:30:47.063 "num_base_bdevs_discovered": 1, 00:30:47.063 "num_base_bdevs_operational": 3, 00:30:47.063 "base_bdevs_list": [ 00:30:47.063 { 00:30:47.063 "name": "BaseBdev1", 00:30:47.063 "uuid": "8215a6fa-eb23-4415-8637-38cabcd322ab", 00:30:47.063 "is_configured": true, 00:30:47.063 "data_offset": 0, 00:30:47.063 "data_size": 65536 00:30:47.063 }, 00:30:47.063 { 00:30:47.063 "name": "BaseBdev2", 00:30:47.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.063 "is_configured": false, 00:30:47.063 "data_offset": 0, 00:30:47.063 "data_size": 0 00:30:47.063 }, 00:30:47.063 { 00:30:47.063 "name": "BaseBdev3", 00:30:47.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.063 "is_configured": false, 00:30:47.063 "data_offset": 0, 00:30:47.063 "data_size": 0 00:30:47.063 } 00:30:47.063 ] 00:30:47.063 }' 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.063 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.320 16:00:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:47.320 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.320 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.578 [2024-11-05 16:00:19.745022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:47.578 BaseBdev2 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.578 16:00:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:47.578 [ 00:30:47.578 { 00:30:47.578 "name": "BaseBdev2", 00:30:47.578 "aliases": [ 00:30:47.578 "5c519692-3582-4d62-b22a-70ed6a5f360a" 00:30:47.578 ], 00:30:47.578 "product_name": "Malloc disk", 00:30:47.578 "block_size": 512, 00:30:47.578 "num_blocks": 65536, 00:30:47.578 "uuid": "5c519692-3582-4d62-b22a-70ed6a5f360a", 00:30:47.578 "assigned_rate_limits": { 00:30:47.578 "rw_ios_per_sec": 0, 00:30:47.578 "rw_mbytes_per_sec": 0, 00:30:47.578 "r_mbytes_per_sec": 0, 00:30:47.578 "w_mbytes_per_sec": 0 00:30:47.578 }, 00:30:47.578 "claimed": true, 00:30:47.578 "claim_type": "exclusive_write", 00:30:47.578 "zoned": false, 00:30:47.578 "supported_io_types": { 00:30:47.578 "read": true, 00:30:47.578 "write": true, 00:30:47.578 "unmap": true, 00:30:47.579 "flush": true, 00:30:47.579 "reset": true, 00:30:47.579 "nvme_admin": false, 00:30:47.579 "nvme_io": false, 00:30:47.579 "nvme_io_md": false, 00:30:47.579 "write_zeroes": true, 00:30:47.579 "zcopy": true, 00:30:47.579 "get_zone_info": false, 00:30:47.579 "zone_management": false, 00:30:47.579 "zone_append": false, 00:30:47.579 "compare": false, 00:30:47.579 "compare_and_write": false, 00:30:47.579 "abort": true, 00:30:47.579 "seek_hole": false, 00:30:47.579 "seek_data": false, 00:30:47.579 "copy": true, 00:30:47.579 "nvme_iov_md": false 00:30:47.579 }, 00:30:47.579 "memory_domains": [ 00:30:47.579 { 00:30:47.579 "dma_device_id": "system", 00:30:47.579 "dma_device_type": 1 00:30:47.579 }, 00:30:47.579 { 00:30:47.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:47.579 "dma_device_type": 2 00:30:47.579 } 00:30:47.579 ], 00:30:47.579 "driver_specific": {} 00:30:47.579 } 00:30:47.579 ] 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- 
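The BaseBdev2 dump above reports `block_size 512` and `num_blocks 65536`, which follows from the earlier `bdev_malloc_create 32 512` call: a 32 MiB malloc bdev divided into 512-byte blocks. A minimal sketch of that arithmetic (the check is ours, not part of the test script):

```shell
# bdev_malloc_create 32 512 -> 32 MiB volume with 512 B blocks.
# size_mb/block_size values are taken from the log; the math is ours.
size_mb=32
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "$num_blocks"   # matches "num_blocks": 65536 in the dump
```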
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:30:47.579 "name": "Existed_Raid", 00:30:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.579 "strip_size_kb": 64, 00:30:47.579 "state": "configuring", 00:30:47.579 "raid_level": "raid5f", 00:30:47.579 "superblock": false, 00:30:47.579 "num_base_bdevs": 3, 00:30:47.579 "num_base_bdevs_discovered": 2, 00:30:47.579 "num_base_bdevs_operational": 3, 00:30:47.579 "base_bdevs_list": [ 00:30:47.579 { 00:30:47.579 "name": "BaseBdev1", 00:30:47.579 "uuid": "8215a6fa-eb23-4415-8637-38cabcd322ab", 00:30:47.579 "is_configured": true, 00:30:47.579 "data_offset": 0, 00:30:47.579 "data_size": 65536 00:30:47.579 }, 00:30:47.579 { 00:30:47.579 "name": "BaseBdev2", 00:30:47.579 "uuid": "5c519692-3582-4d62-b22a-70ed6a5f360a", 00:30:47.579 "is_configured": true, 00:30:47.579 "data_offset": 0, 00:30:47.579 "data_size": 65536 00:30:47.579 }, 00:30:47.579 { 00:30:47.579 "name": "BaseBdev3", 00:30:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.579 "is_configured": false, 00:30:47.579 "data_offset": 0, 00:30:47.579 "data_size": 0 00:30:47.579 } 00:30:47.579 ] 00:30:47.579 }' 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.579 16:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.837 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:47.837 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.837 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.837 [2024-11-05 16:00:20.131234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:47.837 [2024-11-05 16:00:20.131276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:47.837 [2024-11-05 16:00:20.131285] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:47.837 [2024-11-05 16:00:20.131494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:47.837 [2024-11-05 16:00:20.134521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:47.837 [2024-11-05 16:00:20.134540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:47.838 [2024-11-05 16:00:20.134742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.838 BaseBdev3 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.838 [ 00:30:47.838 { 00:30:47.838 "name": "BaseBdev3", 00:30:47.838 "aliases": [ 00:30:47.838 "00c77454-9bd7-479b-8411-9f5fc462222e" 00:30:47.838 ], 00:30:47.838 "product_name": "Malloc disk", 00:30:47.838 "block_size": 512, 00:30:47.838 "num_blocks": 65536, 00:30:47.838 "uuid": "00c77454-9bd7-479b-8411-9f5fc462222e", 00:30:47.838 "assigned_rate_limits": { 00:30:47.838 "rw_ios_per_sec": 0, 00:30:47.838 "rw_mbytes_per_sec": 0, 00:30:47.838 "r_mbytes_per_sec": 0, 00:30:47.838 "w_mbytes_per_sec": 0 00:30:47.838 }, 00:30:47.838 "claimed": true, 00:30:47.838 "claim_type": "exclusive_write", 00:30:47.838 "zoned": false, 00:30:47.838 "supported_io_types": { 00:30:47.838 "read": true, 00:30:47.838 "write": true, 00:30:47.838 "unmap": true, 00:30:47.838 "flush": true, 00:30:47.838 "reset": true, 00:30:47.838 "nvme_admin": false, 00:30:47.838 "nvme_io": false, 00:30:47.838 "nvme_io_md": false, 00:30:47.838 "write_zeroes": true, 00:30:47.838 "zcopy": true, 00:30:47.838 "get_zone_info": false, 00:30:47.838 "zone_management": false, 00:30:47.838 "zone_append": false, 00:30:47.838 "compare": false, 00:30:47.838 "compare_and_write": false, 00:30:47.838 "abort": true, 00:30:47.838 "seek_hole": false, 00:30:47.838 "seek_data": false, 00:30:47.838 "copy": true, 00:30:47.838 "nvme_iov_md": false 00:30:47.838 }, 00:30:47.838 "memory_domains": [ 00:30:47.838 { 00:30:47.838 "dma_device_id": "system", 00:30:47.838 "dma_device_type": 1 00:30:47.838 }, 00:30:47.838 { 00:30:47.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:47.838 "dma_device_type": 2 00:30:47.838 } 00:30:47.838 ], 00:30:47.838 "driver_specific": {} 00:30:47.838 } 00:30:47.838 ] 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.838 16:00:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.838 "name": "Existed_Raid", 00:30:47.838 "uuid": "dc073db6-dca2-4616-8e2a-7a95538f4fa8", 00:30:47.838 "strip_size_kb": 64, 00:30:47.838 "state": "online", 00:30:47.838 "raid_level": "raid5f", 00:30:47.838 "superblock": false, 00:30:47.838 "num_base_bdevs": 3, 00:30:47.838 "num_base_bdevs_discovered": 3, 00:30:47.838 "num_base_bdevs_operational": 3, 00:30:47.838 "base_bdevs_list": [ 00:30:47.838 { 00:30:47.838 "name": "BaseBdev1", 00:30:47.838 "uuid": "8215a6fa-eb23-4415-8637-38cabcd322ab", 00:30:47.838 "is_configured": true, 00:30:47.838 "data_offset": 0, 00:30:47.838 "data_size": 65536 00:30:47.838 }, 00:30:47.838 { 00:30:47.838 "name": "BaseBdev2", 00:30:47.838 "uuid": "5c519692-3582-4d62-b22a-70ed6a5f360a", 00:30:47.838 "is_configured": true, 00:30:47.838 "data_offset": 0, 00:30:47.838 "data_size": 65536 00:30:47.838 }, 00:30:47.838 { 00:30:47.838 "name": "BaseBdev3", 00:30:47.838 "uuid": "00c77454-9bd7-479b-8411-9f5fc462222e", 00:30:47.838 "is_configured": true, 00:30:47.838 "data_offset": 0, 00:30:47.838 "data_size": 65536 00:30:47.838 } 00:30:47.838 ] 00:30:47.838 }' 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.838 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:48.097 16:00:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:48.097 [2024-11-05 16:00:20.486165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.097 "name": "Existed_Raid", 00:30:48.097 "aliases": [ 00:30:48.097 "dc073db6-dca2-4616-8e2a-7a95538f4fa8" 00:30:48.097 ], 00:30:48.097 "product_name": "Raid Volume", 00:30:48.097 "block_size": 512, 00:30:48.097 "num_blocks": 131072, 00:30:48.097 "uuid": "dc073db6-dca2-4616-8e2a-7a95538f4fa8", 00:30:48.097 "assigned_rate_limits": { 00:30:48.097 "rw_ios_per_sec": 0, 00:30:48.097 "rw_mbytes_per_sec": 0, 00:30:48.097 "r_mbytes_per_sec": 0, 00:30:48.097 "w_mbytes_per_sec": 0 00:30:48.097 }, 00:30:48.097 "claimed": false, 00:30:48.097 "zoned": false, 00:30:48.097 "supported_io_types": { 00:30:48.097 "read": true, 00:30:48.097 "write": true, 00:30:48.097 "unmap": false, 00:30:48.097 "flush": false, 00:30:48.097 "reset": true, 00:30:48.097 "nvme_admin": false, 00:30:48.097 "nvme_io": false, 00:30:48.097 "nvme_io_md": false, 00:30:48.097 "write_zeroes": true, 00:30:48.097 "zcopy": false, 00:30:48.097 "get_zone_info": false, 00:30:48.097 "zone_management": false, 00:30:48.097 "zone_append": false, 
00:30:48.097 "compare": false, 00:30:48.097 "compare_and_write": false, 00:30:48.097 "abort": false, 00:30:48.097 "seek_hole": false, 00:30:48.097 "seek_data": false, 00:30:48.097 "copy": false, 00:30:48.097 "nvme_iov_md": false 00:30:48.097 }, 00:30:48.097 "driver_specific": { 00:30:48.097 "raid": { 00:30:48.097 "uuid": "dc073db6-dca2-4616-8e2a-7a95538f4fa8", 00:30:48.097 "strip_size_kb": 64, 00:30:48.097 "state": "online", 00:30:48.097 "raid_level": "raid5f", 00:30:48.097 "superblock": false, 00:30:48.097 "num_base_bdevs": 3, 00:30:48.097 "num_base_bdevs_discovered": 3, 00:30:48.097 "num_base_bdevs_operational": 3, 00:30:48.097 "base_bdevs_list": [ 00:30:48.097 { 00:30:48.097 "name": "BaseBdev1", 00:30:48.097 "uuid": "8215a6fa-eb23-4415-8637-38cabcd322ab", 00:30:48.097 "is_configured": true, 00:30:48.097 "data_offset": 0, 00:30:48.097 "data_size": 65536 00:30:48.097 }, 00:30:48.097 { 00:30:48.097 "name": "BaseBdev2", 00:30:48.097 "uuid": "5c519692-3582-4d62-b22a-70ed6a5f360a", 00:30:48.097 "is_configured": true, 00:30:48.097 "data_offset": 0, 00:30:48.097 "data_size": 65536 00:30:48.097 }, 00:30:48.097 { 00:30:48.097 "name": "BaseBdev3", 00:30:48.097 "uuid": "00c77454-9bd7-479b-8411-9f5fc462222e", 00:30:48.097 "is_configured": true, 00:30:48.097 "data_offset": 0, 00:30:48.097 "data_size": 65536 00:30:48.097 } 00:30:48.097 ] 00:30:48.097 } 00:30:48.097 } 00:30:48.097 }' 00:30:48.097 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:48.355 BaseBdev2 00:30:48.355 BaseBdev3' 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
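The Raid Volume dump above shows `num_blocks 131072` built from three 65536-block base bdevs: raid5f spends one base bdev's worth of capacity on rotating parity, and each base bdev is rounded down to whole strips (`strip_size_kb 64` at a 512 B block size is 128 blocks per strip). A minimal sketch of that capacity calculation, using values copied from the log (variable names are ours, and this is an approximation of the sizing, not SPDK's exact code path):

```shell
# raid5f usable capacity: one strip of parity per stripe, so (n-1) of the
# n base bdevs contribute data; base bdevs are truncated to whole strips.
num_base_bdevs=3
base_blocks=65536                      # per-bdev blocks, from the dumps
strip_blocks=$(( 64 * 1024 / 512 ))    # strip_size_kb=64, block_size=512
strips_per_bdev=$(( base_blocks / strip_blocks ))
usable_blocks=$(( (num_base_bdevs - 1) * strips_per_bdev * strip_blocks ))
echo "$usable_blocks"   # matches "num_blocks": 131072 in the Raid Volume dump
```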
' 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.355 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.356 [2024-11-05 16:00:20.662101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:48.356 
16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:48.356 "name": "Existed_Raid", 00:30:48.356 "uuid": "dc073db6-dca2-4616-8e2a-7a95538f4fa8", 00:30:48.356 "strip_size_kb": 64, 00:30:48.356 "state": 
"online", 00:30:48.356 "raid_level": "raid5f", 00:30:48.356 "superblock": false, 00:30:48.356 "num_base_bdevs": 3, 00:30:48.356 "num_base_bdevs_discovered": 2, 00:30:48.356 "num_base_bdevs_operational": 2, 00:30:48.356 "base_bdevs_list": [ 00:30:48.356 { 00:30:48.356 "name": null, 00:30:48.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.356 "is_configured": false, 00:30:48.356 "data_offset": 0, 00:30:48.356 "data_size": 65536 00:30:48.356 }, 00:30:48.356 { 00:30:48.356 "name": "BaseBdev2", 00:30:48.356 "uuid": "5c519692-3582-4d62-b22a-70ed6a5f360a", 00:30:48.356 "is_configured": true, 00:30:48.356 "data_offset": 0, 00:30:48.356 "data_size": 65536 00:30:48.356 }, 00:30:48.356 { 00:30:48.356 "name": "BaseBdev3", 00:30:48.356 "uuid": "00c77454-9bd7-479b-8411-9f5fc462222e", 00:30:48.356 "is_configured": true, 00:30:48.356 "data_offset": 0, 00:30:48.356 "data_size": 65536 00:30:48.356 } 00:30:48.356 ] 00:30:48.356 }' 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:48.356 16:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 [2024-11-05 16:00:21.084070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:48.922 [2024-11-05 16:00:21.084148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:48.922 [2024-11-05 16:00:21.128669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 [2024-11-05 16:00:21.168721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:48.922 [2024-11-05 16:00:21.168874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 BaseBdev2 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:30:48.922 [ 00:30:48.922 { 00:30:48.922 "name": "BaseBdev2", 00:30:48.922 "aliases": [ 00:30:48.922 "84139abe-d032-4bac-ba40-8333ed3b12ed" 00:30:48.922 ], 00:30:48.922 "product_name": "Malloc disk", 00:30:48.922 "block_size": 512, 00:30:48.922 "num_blocks": 65536, 00:30:48.922 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:48.922 "assigned_rate_limits": { 00:30:48.922 "rw_ios_per_sec": 0, 00:30:48.922 "rw_mbytes_per_sec": 0, 00:30:48.922 "r_mbytes_per_sec": 0, 00:30:48.922 "w_mbytes_per_sec": 0 00:30:48.922 }, 00:30:48.922 "claimed": false, 00:30:48.922 "zoned": false, 00:30:48.922 "supported_io_types": { 00:30:48.922 "read": true, 00:30:48.922 "write": true, 00:30:48.922 "unmap": true, 00:30:48.922 "flush": true, 00:30:48.922 "reset": true, 00:30:48.922 "nvme_admin": false, 00:30:48.922 "nvme_io": false, 00:30:48.922 "nvme_io_md": false, 00:30:48.922 "write_zeroes": true, 00:30:48.922 "zcopy": true, 00:30:48.922 "get_zone_info": false, 00:30:48.922 "zone_management": false, 00:30:48.922 "zone_append": false, 00:30:48.922 "compare": false, 00:30:48.922 "compare_and_write": false, 00:30:48.922 "abort": true, 00:30:48.922 "seek_hole": false, 00:30:48.922 "seek_data": false, 00:30:48.922 "copy": true, 00:30:48.922 "nvme_iov_md": false 00:30:48.922 }, 00:30:48.922 "memory_domains": [ 00:30:48.922 { 00:30:48.922 "dma_device_id": "system", 00:30:48.922 "dma_device_type": 1 00:30:48.922 }, 00:30:48.922 { 00:30:48.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.922 "dma_device_type": 2 00:30:48.922 } 00:30:48.922 ], 00:30:48.922 "driver_specific": {} 00:30:48.922 } 00:30:48.922 ] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.922 BaseBdev3 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:48.922 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:48.923 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:48.923 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.923 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.923 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.923 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:48.923 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.923 16:00:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:49.180 [ 00:30:49.180 { 00:30:49.180 "name": "BaseBdev3", 00:30:49.180 "aliases": [ 00:30:49.180 "c290ea4e-b36d-4ced-b302-865a5e2329da" 00:30:49.180 ], 00:30:49.180 "product_name": "Malloc disk", 00:30:49.180 "block_size": 512, 00:30:49.180 "num_blocks": 65536, 00:30:49.180 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:49.180 "assigned_rate_limits": { 00:30:49.180 "rw_ios_per_sec": 0, 00:30:49.180 "rw_mbytes_per_sec": 0, 00:30:49.180 "r_mbytes_per_sec": 0, 00:30:49.180 "w_mbytes_per_sec": 0 00:30:49.180 }, 00:30:49.180 "claimed": false, 00:30:49.180 "zoned": false, 00:30:49.180 "supported_io_types": { 00:30:49.180 "read": true, 00:30:49.180 "write": true, 00:30:49.180 "unmap": true, 00:30:49.180 "flush": true, 00:30:49.180 "reset": true, 00:30:49.180 "nvme_admin": false, 00:30:49.180 "nvme_io": false, 00:30:49.180 "nvme_io_md": false, 00:30:49.180 "write_zeroes": true, 00:30:49.180 "zcopy": true, 00:30:49.180 "get_zone_info": false, 00:30:49.180 "zone_management": false, 00:30:49.180 "zone_append": false, 00:30:49.180 "compare": false, 00:30:49.180 "compare_and_write": false, 00:30:49.180 "abort": true, 00:30:49.180 "seek_hole": false, 00:30:49.180 "seek_data": false, 00:30:49.180 "copy": true, 00:30:49.180 "nvme_iov_md": false 00:30:49.180 }, 00:30:49.180 "memory_domains": [ 00:30:49.180 { 00:30:49.180 "dma_device_id": "system", 00:30:49.180 "dma_device_type": 1 00:30:49.180 }, 00:30:49.180 { 00:30:49.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.180 "dma_device_type": 2 00:30:49.180 } 00:30:49.180 ], 00:30:49.180 "driver_specific": {} 00:30:49.180 } 00:30:49.180 ] 00:30:49.180 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.180 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:30:49.180 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:49.180 16:00:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:49.180 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:49.180 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.180 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.180 [2024-11-05 16:00:21.349937] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:49.180 [2024-11-05 16:00:21.350057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:49.180 [2024-11-05 16:00:21.350116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:49.180 [2024-11-05 16:00:21.351612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:49.181 16:00:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:49.181 "name": "Existed_Raid", 00:30:49.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.181 "strip_size_kb": 64, 00:30:49.181 "state": "configuring", 00:30:49.181 "raid_level": "raid5f", 00:30:49.181 "superblock": false, 00:30:49.181 "num_base_bdevs": 3, 00:30:49.181 "num_base_bdevs_discovered": 2, 00:30:49.181 "num_base_bdevs_operational": 3, 00:30:49.181 "base_bdevs_list": [ 00:30:49.181 { 00:30:49.181 "name": "BaseBdev1", 00:30:49.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.181 "is_configured": false, 00:30:49.181 "data_offset": 0, 00:30:49.181 "data_size": 0 00:30:49.181 }, 00:30:49.181 { 00:30:49.181 "name": "BaseBdev2", 00:30:49.181 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:49.181 "is_configured": true, 00:30:49.181 "data_offset": 0, 00:30:49.181 "data_size": 65536 00:30:49.181 }, 00:30:49.181 { 00:30:49.181 "name": "BaseBdev3", 00:30:49.181 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:49.181 "is_configured": true, 
00:30:49.181 "data_offset": 0, 00:30:49.181 "data_size": 65536 00:30:49.181 } 00:30:49.181 ] 00:30:49.181 }' 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:49.181 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.438 [2024-11-05 16:00:21.654001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:49.438 16:00:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.438 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.439 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:49.439 "name": "Existed_Raid", 00:30:49.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.439 "strip_size_kb": 64, 00:30:49.439 "state": "configuring", 00:30:49.439 "raid_level": "raid5f", 00:30:49.439 "superblock": false, 00:30:49.439 "num_base_bdevs": 3, 00:30:49.439 "num_base_bdevs_discovered": 1, 00:30:49.439 "num_base_bdevs_operational": 3, 00:30:49.439 "base_bdevs_list": [ 00:30:49.439 { 00:30:49.439 "name": "BaseBdev1", 00:30:49.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.439 "is_configured": false, 00:30:49.439 "data_offset": 0, 00:30:49.439 "data_size": 0 00:30:49.439 }, 00:30:49.439 { 00:30:49.439 "name": null, 00:30:49.439 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:49.439 "is_configured": false, 00:30:49.439 "data_offset": 0, 00:30:49.439 "data_size": 65536 00:30:49.439 }, 00:30:49.439 { 00:30:49.439 "name": "BaseBdev3", 00:30:49.439 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:49.439 "is_configured": true, 00:30:49.439 "data_offset": 0, 00:30:49.439 "data_size": 65536 00:30:49.439 } 00:30:49.439 ] 00:30:49.439 }' 00:30:49.439 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:49.439 16:00:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.697 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.697 16:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:49.697 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.697 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.697 16:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.697 [2024-11-05 16:00:22.028271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:49.697 BaseBdev1 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:49.697 16:00:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.697 [ 00:30:49.697 { 00:30:49.697 "name": "BaseBdev1", 00:30:49.697 "aliases": [ 00:30:49.697 "91b0d409-1e03-4326-8e14-1c6d258c92f3" 00:30:49.697 ], 00:30:49.697 "product_name": "Malloc disk", 00:30:49.697 "block_size": 512, 00:30:49.697 "num_blocks": 65536, 00:30:49.697 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:49.697 "assigned_rate_limits": { 00:30:49.697 "rw_ios_per_sec": 0, 00:30:49.697 "rw_mbytes_per_sec": 0, 00:30:49.697 "r_mbytes_per_sec": 0, 00:30:49.697 "w_mbytes_per_sec": 0 00:30:49.697 }, 00:30:49.697 "claimed": true, 00:30:49.697 "claim_type": "exclusive_write", 00:30:49.697 "zoned": false, 00:30:49.697 "supported_io_types": { 00:30:49.697 "read": true, 00:30:49.697 "write": true, 00:30:49.697 "unmap": true, 00:30:49.697 "flush": true, 00:30:49.697 "reset": true, 00:30:49.697 "nvme_admin": false, 00:30:49.697 "nvme_io": false, 00:30:49.697 "nvme_io_md": false, 00:30:49.697 "write_zeroes": true, 00:30:49.697 "zcopy": true, 00:30:49.697 "get_zone_info": false, 00:30:49.697 "zone_management": false, 00:30:49.697 "zone_append": false, 00:30:49.697 
"compare": false, 00:30:49.697 "compare_and_write": false, 00:30:49.697 "abort": true, 00:30:49.697 "seek_hole": false, 00:30:49.697 "seek_data": false, 00:30:49.697 "copy": true, 00:30:49.697 "nvme_iov_md": false 00:30:49.697 }, 00:30:49.697 "memory_domains": [ 00:30:49.697 { 00:30:49.697 "dma_device_id": "system", 00:30:49.697 "dma_device_type": 1 00:30:49.697 }, 00:30:49.697 { 00:30:49.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.697 "dma_device_type": 2 00:30:49.697 } 00:30:49.697 ], 00:30:49.697 "driver_specific": {} 00:30:49.697 } 00:30:49.697 ] 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:49.697 16:00:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:49.697 "name": "Existed_Raid", 00:30:49.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.697 "strip_size_kb": 64, 00:30:49.697 "state": "configuring", 00:30:49.697 "raid_level": "raid5f", 00:30:49.697 "superblock": false, 00:30:49.697 "num_base_bdevs": 3, 00:30:49.697 "num_base_bdevs_discovered": 2, 00:30:49.697 "num_base_bdevs_operational": 3, 00:30:49.697 "base_bdevs_list": [ 00:30:49.697 { 00:30:49.697 "name": "BaseBdev1", 00:30:49.697 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:49.697 "is_configured": true, 00:30:49.697 "data_offset": 0, 00:30:49.697 "data_size": 65536 00:30:49.697 }, 00:30:49.697 { 00:30:49.697 "name": null, 00:30:49.697 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:49.697 "is_configured": false, 00:30:49.697 "data_offset": 0, 00:30:49.697 "data_size": 65536 00:30:49.697 }, 00:30:49.697 { 00:30:49.697 "name": "BaseBdev3", 00:30:49.697 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:49.697 "is_configured": true, 00:30:49.697 "data_offset": 0, 00:30:49.697 "data_size": 65536 00:30:49.697 } 00:30:49.697 ] 00:30:49.697 }' 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:49.697 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.955 16:00:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:49.955 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.955 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.955 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.955 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.212 [2024-11-05 16:00:22.380357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:50.212 16:00:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.212 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.212 "name": "Existed_Raid", 00:30:50.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.212 "strip_size_kb": 64, 00:30:50.212 "state": "configuring", 00:30:50.212 "raid_level": "raid5f", 00:30:50.212 "superblock": false, 00:30:50.212 "num_base_bdevs": 3, 00:30:50.212 "num_base_bdevs_discovered": 1, 00:30:50.212 "num_base_bdevs_operational": 3, 00:30:50.212 "base_bdevs_list": [ 00:30:50.212 { 00:30:50.212 "name": "BaseBdev1", 00:30:50.213 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:50.213 "is_configured": true, 00:30:50.213 "data_offset": 0, 00:30:50.213 "data_size": 65536 00:30:50.213 }, 00:30:50.213 { 00:30:50.213 "name": null, 00:30:50.213 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:50.213 "is_configured": false, 00:30:50.213 "data_offset": 0, 00:30:50.213 "data_size": 65536 00:30:50.213 }, 00:30:50.213 { 00:30:50.213 "name": null, 
00:30:50.213 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:50.213 "is_configured": false, 00:30:50.213 "data_offset": 0, 00:30:50.213 "data_size": 65536 00:30:50.213 } 00:30:50.213 ] 00:30:50.213 }' 00:30:50.213 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.213 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.471 [2024-11-05 16:00:22.720441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:50.471 16:00:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.471 "name": "Existed_Raid", 00:30:50.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.471 "strip_size_kb": 64, 00:30:50.471 "state": "configuring", 00:30:50.471 "raid_level": "raid5f", 00:30:50.471 "superblock": false, 00:30:50.471 "num_base_bdevs": 3, 00:30:50.471 "num_base_bdevs_discovered": 2, 00:30:50.471 "num_base_bdevs_operational": 3, 00:30:50.471 "base_bdevs_list": [ 00:30:50.471 { 
00:30:50.471 "name": "BaseBdev1", 00:30:50.471 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:50.471 "is_configured": true, 00:30:50.471 "data_offset": 0, 00:30:50.471 "data_size": 65536 00:30:50.471 }, 00:30:50.471 { 00:30:50.471 "name": null, 00:30:50.471 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:50.471 "is_configured": false, 00:30:50.471 "data_offset": 0, 00:30:50.471 "data_size": 65536 00:30:50.471 }, 00:30:50.471 { 00:30:50.471 "name": "BaseBdev3", 00:30:50.471 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:50.471 "is_configured": true, 00:30:50.471 "data_offset": 0, 00:30:50.471 "data_size": 65536 00:30:50.471 } 00:30:50.471 ] 00:30:50.471 }' 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.471 16:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.729 [2024-11-05 16:00:23.088511] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.729 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.730 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.987 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.987 16:00:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.987 "name": "Existed_Raid", 00:30:50.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.987 "strip_size_kb": 64, 00:30:50.987 "state": "configuring", 00:30:50.987 "raid_level": "raid5f", 00:30:50.987 "superblock": false, 00:30:50.987 "num_base_bdevs": 3, 00:30:50.987 "num_base_bdevs_discovered": 1, 00:30:50.987 "num_base_bdevs_operational": 3, 00:30:50.987 "base_bdevs_list": [ 00:30:50.987 { 00:30:50.987 "name": null, 00:30:50.987 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:50.987 "is_configured": false, 00:30:50.987 "data_offset": 0, 00:30:50.987 "data_size": 65536 00:30:50.987 }, 00:30:50.987 { 00:30:50.987 "name": null, 00:30:50.987 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:50.987 "is_configured": false, 00:30:50.987 "data_offset": 0, 00:30:50.987 "data_size": 65536 00:30:50.987 }, 00:30:50.987 { 00:30:50.987 "name": "BaseBdev3", 00:30:50.987 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:50.987 "is_configured": true, 00:30:50.987 "data_offset": 0, 00:30:50.987 "data_size": 65536 00:30:50.987 } 00:30:50.987 ] 00:30:50.987 }' 00:30:50.987 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.987 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.245 [2024-11-05 16:00:23.479222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.245 16:00:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:51.245 "name": "Existed_Raid", 00:30:51.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.245 "strip_size_kb": 64, 00:30:51.245 "state": "configuring", 00:30:51.245 "raid_level": "raid5f", 00:30:51.245 "superblock": false, 00:30:51.245 "num_base_bdevs": 3, 00:30:51.245 "num_base_bdevs_discovered": 2, 00:30:51.245 "num_base_bdevs_operational": 3, 00:30:51.245 "base_bdevs_list": [ 00:30:51.245 { 00:30:51.245 "name": null, 00:30:51.245 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:51.245 "is_configured": false, 00:30:51.245 "data_offset": 0, 00:30:51.245 "data_size": 65536 00:30:51.245 }, 00:30:51.245 { 00:30:51.245 "name": "BaseBdev2", 00:30:51.245 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:51.245 "is_configured": true, 00:30:51.245 "data_offset": 0, 00:30:51.245 "data_size": 65536 00:30:51.245 }, 00:30:51.245 { 00:30:51.245 "name": "BaseBdev3", 00:30:51.245 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:51.245 "is_configured": true, 00:30:51.245 "data_offset": 0, 00:30:51.245 "data_size": 65536 00:30:51.245 } 00:30:51.245 ] 00:30:51.245 }' 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:51.245 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:51.503 16:00:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 91b0d409-1e03-4326-8e14-1c6d258c92f3 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.503 [2024-11-05 16:00:23.889301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:51.503 [2024-11-05 16:00:23.889331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:51.503 [2024-11-05 16:00:23.889338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:51.503 [2024-11-05 16:00:23.889530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:30:51.503 [2024-11-05 16:00:23.892490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:51.503 [2024-11-05 16:00:23.892574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:51.503 [2024-11-05 16:00:23.892808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:51.503 NewBaseBdev 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:51.503 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.503 16:00:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.503 [ 00:30:51.503 { 00:30:51.503 "name": "NewBaseBdev", 00:30:51.503 "aliases": [ 00:30:51.503 "91b0d409-1e03-4326-8e14-1c6d258c92f3" 00:30:51.503 ], 00:30:51.503 "product_name": "Malloc disk", 00:30:51.503 "block_size": 512, 00:30:51.503 "num_blocks": 65536, 00:30:51.503 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:51.503 "assigned_rate_limits": { 00:30:51.503 "rw_ios_per_sec": 0, 00:30:51.503 "rw_mbytes_per_sec": 0, 00:30:51.503 "r_mbytes_per_sec": 0, 00:30:51.503 "w_mbytes_per_sec": 0 00:30:51.503 }, 00:30:51.503 "claimed": true, 00:30:51.503 "claim_type": "exclusive_write", 00:30:51.503 "zoned": false, 00:30:51.503 "supported_io_types": { 00:30:51.503 "read": true, 00:30:51.503 "write": true, 00:30:51.503 "unmap": true, 00:30:51.503 "flush": true, 00:30:51.503 "reset": true, 00:30:51.503 "nvme_admin": false, 00:30:51.503 "nvme_io": false, 00:30:51.503 "nvme_io_md": false, 00:30:51.503 "write_zeroes": true, 00:30:51.503 "zcopy": true, 00:30:51.503 "get_zone_info": false, 00:30:51.503 "zone_management": false, 00:30:51.503 "zone_append": false, 00:30:51.503 "compare": false, 00:30:51.762 "compare_and_write": false, 00:30:51.762 "abort": true, 00:30:51.762 "seek_hole": false, 00:30:51.762 "seek_data": false, 00:30:51.762 "copy": true, 00:30:51.762 "nvme_iov_md": false 00:30:51.762 }, 00:30:51.762 "memory_domains": [ 00:30:51.762 { 00:30:51.762 "dma_device_id": "system", 00:30:51.762 "dma_device_type": 1 00:30:51.762 }, 00:30:51.762 { 00:30:51.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:51.762 "dma_device_type": 2 00:30:51.762 } 00:30:51.762 ], 00:30:51.762 "driver_specific": {} 00:30:51.762 } 00:30:51.762 ] 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:30:51.762 16:00:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:51.762 "name": "Existed_Raid", 00:30:51.762 "uuid": "7f42811d-045e-422f-997a-7c0837a2b7c8", 00:30:51.762 "strip_size_kb": 64, 00:30:51.762 "state": "online", 
00:30:51.762 "raid_level": "raid5f", 00:30:51.762 "superblock": false, 00:30:51.762 "num_base_bdevs": 3, 00:30:51.762 "num_base_bdevs_discovered": 3, 00:30:51.762 "num_base_bdevs_operational": 3, 00:30:51.762 "base_bdevs_list": [ 00:30:51.762 { 00:30:51.762 "name": "NewBaseBdev", 00:30:51.762 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:51.762 "is_configured": true, 00:30:51.762 "data_offset": 0, 00:30:51.762 "data_size": 65536 00:30:51.762 }, 00:30:51.762 { 00:30:51.762 "name": "BaseBdev2", 00:30:51.762 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:51.762 "is_configured": true, 00:30:51.762 "data_offset": 0, 00:30:51.762 "data_size": 65536 00:30:51.762 }, 00:30:51.762 { 00:30:51.762 "name": "BaseBdev3", 00:30:51.762 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:51.762 "is_configured": true, 00:30:51.762 "data_offset": 0, 00:30:51.762 "data_size": 65536 00:30:51.762 } 00:30:51.762 ] 00:30:51.762 }' 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:51.762 16:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:52.020 16:00:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.020 [2024-11-05 16:00:24.260318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.020 "name": "Existed_Raid", 00:30:52.020 "aliases": [ 00:30:52.020 "7f42811d-045e-422f-997a-7c0837a2b7c8" 00:30:52.020 ], 00:30:52.020 "product_name": "Raid Volume", 00:30:52.020 "block_size": 512, 00:30:52.020 "num_blocks": 131072, 00:30:52.020 "uuid": "7f42811d-045e-422f-997a-7c0837a2b7c8", 00:30:52.020 "assigned_rate_limits": { 00:30:52.020 "rw_ios_per_sec": 0, 00:30:52.020 "rw_mbytes_per_sec": 0, 00:30:52.020 "r_mbytes_per_sec": 0, 00:30:52.020 "w_mbytes_per_sec": 0 00:30:52.020 }, 00:30:52.020 "claimed": false, 00:30:52.020 "zoned": false, 00:30:52.020 "supported_io_types": { 00:30:52.020 "read": true, 00:30:52.020 "write": true, 00:30:52.020 "unmap": false, 00:30:52.020 "flush": false, 00:30:52.020 "reset": true, 00:30:52.020 "nvme_admin": false, 00:30:52.020 "nvme_io": false, 00:30:52.020 "nvme_io_md": false, 00:30:52.020 "write_zeroes": true, 00:30:52.020 "zcopy": false, 00:30:52.020 "get_zone_info": false, 00:30:52.020 "zone_management": false, 00:30:52.020 "zone_append": false, 00:30:52.020 "compare": false, 00:30:52.020 "compare_and_write": false, 00:30:52.020 "abort": false, 00:30:52.020 "seek_hole": false, 00:30:52.020 "seek_data": false, 00:30:52.020 "copy": false, 00:30:52.020 "nvme_iov_md": false 00:30:52.020 }, 00:30:52.020 "driver_specific": { 00:30:52.020 "raid": { 00:30:52.020 "uuid": 
"7f42811d-045e-422f-997a-7c0837a2b7c8", 00:30:52.020 "strip_size_kb": 64, 00:30:52.020 "state": "online", 00:30:52.020 "raid_level": "raid5f", 00:30:52.020 "superblock": false, 00:30:52.020 "num_base_bdevs": 3, 00:30:52.020 "num_base_bdevs_discovered": 3, 00:30:52.020 "num_base_bdevs_operational": 3, 00:30:52.020 "base_bdevs_list": [ 00:30:52.020 { 00:30:52.020 "name": "NewBaseBdev", 00:30:52.020 "uuid": "91b0d409-1e03-4326-8e14-1c6d258c92f3", 00:30:52.020 "is_configured": true, 00:30:52.020 "data_offset": 0, 00:30:52.020 "data_size": 65536 00:30:52.020 }, 00:30:52.020 { 00:30:52.020 "name": "BaseBdev2", 00:30:52.020 "uuid": "84139abe-d032-4bac-ba40-8333ed3b12ed", 00:30:52.020 "is_configured": true, 00:30:52.020 "data_offset": 0, 00:30:52.020 "data_size": 65536 00:30:52.020 }, 00:30:52.020 { 00:30:52.020 "name": "BaseBdev3", 00:30:52.020 "uuid": "c290ea4e-b36d-4ced-b302-865a5e2329da", 00:30:52.020 "is_configured": true, 00:30:52.020 "data_offset": 0, 00:30:52.020 "data_size": 65536 00:30:52.020 } 00:30:52.020 ] 00:30:52.020 } 00:30:52.020 } 00:30:52.020 }' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:52.020 BaseBdev2 00:30:52.020 BaseBdev3' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:52.020 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.021 16:00:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.021 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.280 [2024-11-05 16:00:24.440163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:52.280 [2024-11-05 16:00:24.440183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:52.280 [2024-11-05 16:00:24.440237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:52.280 [2024-11-05 16:00:24.440454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:52.280 [2024-11-05 16:00:24.440464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77418 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 77418 ']' 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 77418 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77418 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:52.280 killing process with pid 77418 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77418' 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 77418 00:30:52.280 [2024-11-05 16:00:24.464990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:52.280 16:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 77418 00:30:52.280 [2024-11-05 16:00:24.612466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:52.845 00:30:52.845 real 0m7.358s 00:30:52.845 user 0m11.932s 00:30:52.845 sys 0m1.190s 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:52.845 ************************************ 00:30:52.845 END TEST raid5f_state_function_test 00:30:52.845 ************************************ 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.845 16:00:25 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:30:52.845 16:00:25 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:30:52.845 16:00:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:52.845 16:00:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:52.845 ************************************ 00:30:52.845 START TEST raid5f_state_function_test_sb 00:30:52.845 ************************************ 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:52.845 16:00:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:52.845 Process raid pid: 78002 00:30:52.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78002 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78002' 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78002 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78002 ']' 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:52.845 16:00:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.103 [2024-11-05 16:00:25.290502] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:30:53.103 [2024-11-05 16:00:25.290623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.103 [2024-11-05 16:00:25.448258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.362 [2024-11-05 16:00:25.529830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.362 [2024-11-05 16:00:25.639021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:53.362 [2024-11-05 16:00:25.639059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.928 [2024-11-05 16:00:26.136585] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:53.928 [2024-11-05 16:00:26.136726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:53.928 [2024-11-05 16:00:26.136786] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:53.928 [2024-11-05 16:00:26.136809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:53.928 [2024-11-05 16:00:26.136823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:30:53.928 [2024-11-05 16:00:26.136848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:53.928 16:00:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:53.928 "name": "Existed_Raid", 00:30:53.928 "uuid": "864f21e4-24a3-41e0-a407-279b8e1014ef", 00:30:53.928 "strip_size_kb": 64, 00:30:53.928 "state": "configuring", 00:30:53.928 "raid_level": "raid5f", 00:30:53.928 "superblock": true, 00:30:53.928 "num_base_bdevs": 3, 00:30:53.928 "num_base_bdevs_discovered": 0, 00:30:53.928 "num_base_bdevs_operational": 3, 00:30:53.928 "base_bdevs_list": [ 00:30:53.928 { 00:30:53.928 "name": "BaseBdev1", 00:30:53.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.928 "is_configured": false, 00:30:53.928 "data_offset": 0, 00:30:53.928 "data_size": 0 00:30:53.928 }, 00:30:53.928 { 00:30:53.928 "name": "BaseBdev2", 00:30:53.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.928 "is_configured": false, 00:30:53.928 "data_offset": 0, 00:30:53.928 "data_size": 0 00:30:53.928 }, 00:30:53.928 { 00:30:53.928 "name": "BaseBdev3", 00:30:53.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.928 "is_configured": false, 00:30:53.928 "data_offset": 0, 00:30:53.928 "data_size": 0 00:30:53.928 } 00:30:53.928 ] 00:30:53.928 }' 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:53.928 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.188 [2024-11-05 16:00:26.464611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:54.188 
[2024-11-05 16:00:26.464714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.188 [2024-11-05 16:00:26.472609] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:54.188 [2024-11-05 16:00:26.472710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:54.188 [2024-11-05 16:00:26.472757] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:54.188 [2024-11-05 16:00:26.472777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:54.188 [2024-11-05 16:00:26.472823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:54.188 [2024-11-05 16:00:26.472852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.188 [2024-11-05 16:00:26.500205] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:54.188 BaseBdev1 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.188 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.188 [ 00:30:54.188 { 00:30:54.188 "name": "BaseBdev1", 00:30:54.188 "aliases": [ 00:30:54.188 "51f3dee8-ccba-4817-bd45-661f8c738db2" 00:30:54.188 ], 00:30:54.188 "product_name": "Malloc disk", 00:30:54.188 "block_size": 512, 00:30:54.188 
"num_blocks": 65536, 00:30:54.189 "uuid": "51f3dee8-ccba-4817-bd45-661f8c738db2", 00:30:54.189 "assigned_rate_limits": { 00:30:54.189 "rw_ios_per_sec": 0, 00:30:54.189 "rw_mbytes_per_sec": 0, 00:30:54.189 "r_mbytes_per_sec": 0, 00:30:54.189 "w_mbytes_per_sec": 0 00:30:54.189 }, 00:30:54.189 "claimed": true, 00:30:54.189 "claim_type": "exclusive_write", 00:30:54.189 "zoned": false, 00:30:54.189 "supported_io_types": { 00:30:54.189 "read": true, 00:30:54.189 "write": true, 00:30:54.189 "unmap": true, 00:30:54.189 "flush": true, 00:30:54.189 "reset": true, 00:30:54.189 "nvme_admin": false, 00:30:54.189 "nvme_io": false, 00:30:54.189 "nvme_io_md": false, 00:30:54.189 "write_zeroes": true, 00:30:54.189 "zcopy": true, 00:30:54.189 "get_zone_info": false, 00:30:54.189 "zone_management": false, 00:30:54.189 "zone_append": false, 00:30:54.189 "compare": false, 00:30:54.189 "compare_and_write": false, 00:30:54.189 "abort": true, 00:30:54.189 "seek_hole": false, 00:30:54.189 "seek_data": false, 00:30:54.189 "copy": true, 00:30:54.189 "nvme_iov_md": false 00:30:54.189 }, 00:30:54.189 "memory_domains": [ 00:30:54.189 { 00:30:54.189 "dma_device_id": "system", 00:30:54.189 "dma_device_type": 1 00:30:54.189 }, 00:30:54.189 { 00:30:54.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.189 "dma_device_type": 2 00:30:54.189 } 00:30:54.189 ], 00:30:54.189 "driver_specific": {} 00:30:54.189 } 00:30:54.189 ] 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.189 "name": "Existed_Raid", 00:30:54.189 "uuid": "cdfd6e23-ac3f-4f2e-9ad2-e58a1e915fed", 00:30:54.189 "strip_size_kb": 64, 00:30:54.189 "state": "configuring", 00:30:54.189 "raid_level": "raid5f", 00:30:54.189 "superblock": true, 00:30:54.189 "num_base_bdevs": 3, 00:30:54.189 "num_base_bdevs_discovered": 1, 00:30:54.189 "num_base_bdevs_operational": 3, 00:30:54.189 "base_bdevs_list": [ 00:30:54.189 { 00:30:54.189 
"name": "BaseBdev1", 00:30:54.189 "uuid": "51f3dee8-ccba-4817-bd45-661f8c738db2", 00:30:54.189 "is_configured": true, 00:30:54.189 "data_offset": 2048, 00:30:54.189 "data_size": 63488 00:30:54.189 }, 00:30:54.189 { 00:30:54.189 "name": "BaseBdev2", 00:30:54.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.189 "is_configured": false, 00:30:54.189 "data_offset": 0, 00:30:54.189 "data_size": 0 00:30:54.189 }, 00:30:54.189 { 00:30:54.189 "name": "BaseBdev3", 00:30:54.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.189 "is_configured": false, 00:30:54.189 "data_offset": 0, 00:30:54.189 "data_size": 0 00:30:54.189 } 00:30:54.189 ] 00:30:54.189 }' 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.189 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.446 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:54.446 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.446 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.446 [2024-11-05 16:00:26.824300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:54.446 [2024-11-05 16:00:26.824422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:54.446 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.446 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:30:54.447 [2024-11-05 16:00:26.832341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:54.447 [2024-11-05 16:00:26.833965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:54.447 [2024-11-05 16:00:26.834067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:54.447 [2024-11-05 16:00:26.834116] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:54.447 [2024-11-05 16:00:26.834137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.447 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.705 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.705 "name": "Existed_Raid", 00:30:54.705 "uuid": "d0869d7d-0e39-42cf-96c7-401d15d2456c", 00:30:54.705 "strip_size_kb": 64, 00:30:54.705 "state": "configuring", 00:30:54.705 "raid_level": "raid5f", 00:30:54.705 "superblock": true, 00:30:54.705 "num_base_bdevs": 3, 00:30:54.705 "num_base_bdevs_discovered": 1, 00:30:54.705 "num_base_bdevs_operational": 3, 00:30:54.705 "base_bdevs_list": [ 00:30:54.705 { 00:30:54.705 "name": "BaseBdev1", 00:30:54.705 "uuid": "51f3dee8-ccba-4817-bd45-661f8c738db2", 00:30:54.705 "is_configured": true, 00:30:54.705 "data_offset": 2048, 00:30:54.705 "data_size": 63488 00:30:54.705 }, 00:30:54.705 { 00:30:54.705 "name": "BaseBdev2", 00:30:54.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.705 "is_configured": false, 00:30:54.705 "data_offset": 0, 00:30:54.705 "data_size": 0 00:30:54.705 }, 00:30:54.705 { 00:30:54.705 "name": "BaseBdev3", 00:30:54.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.705 "is_configured": false, 00:30:54.705 "data_offset": 0, 00:30:54.705 "data_size": 
0 00:30:54.705 } 00:30:54.705 ] 00:30:54.705 }' 00:30:54.705 16:00:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.705 16:00:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.963 [2024-11-05 16:00:27.174457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:54.963 BaseBdev2 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.963 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.963 [ 00:30:54.963 { 00:30:54.963 "name": "BaseBdev2", 00:30:54.963 "aliases": [ 00:30:54.963 "152d5d9b-c95a-4328-a6ac-c81eed674104" 00:30:54.963 ], 00:30:54.964 "product_name": "Malloc disk", 00:30:54.964 "block_size": 512, 00:30:54.964 "num_blocks": 65536, 00:30:54.964 "uuid": "152d5d9b-c95a-4328-a6ac-c81eed674104", 00:30:54.964 "assigned_rate_limits": { 00:30:54.964 "rw_ios_per_sec": 0, 00:30:54.964 "rw_mbytes_per_sec": 0, 00:30:54.964 "r_mbytes_per_sec": 0, 00:30:54.964 "w_mbytes_per_sec": 0 00:30:54.964 }, 00:30:54.964 "claimed": true, 00:30:54.964 "claim_type": "exclusive_write", 00:30:54.964 "zoned": false, 00:30:54.964 "supported_io_types": { 00:30:54.964 "read": true, 00:30:54.964 "write": true, 00:30:54.964 "unmap": true, 00:30:54.964 "flush": true, 00:30:54.964 "reset": true, 00:30:54.964 "nvme_admin": false, 00:30:54.964 "nvme_io": false, 00:30:54.964 "nvme_io_md": false, 00:30:54.964 "write_zeroes": true, 00:30:54.964 "zcopy": true, 00:30:54.964 "get_zone_info": false, 00:30:54.964 "zone_management": false, 00:30:54.964 "zone_append": false, 00:30:54.964 "compare": false, 00:30:54.964 "compare_and_write": false, 00:30:54.964 "abort": true, 00:30:54.964 "seek_hole": false, 00:30:54.964 "seek_data": false, 00:30:54.964 "copy": true, 00:30:54.964 "nvme_iov_md": false 00:30:54.964 }, 00:30:54.964 "memory_domains": [ 00:30:54.964 { 00:30:54.964 "dma_device_id": "system", 00:30:54.964 "dma_device_type": 1 00:30:54.964 }, 00:30:54.964 { 00:30:54.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.964 "dma_device_type": 2 00:30:54.964 } 
00:30:54.964 ], 00:30:54.964 "driver_specific": {} 00:30:54.964 } 00:30:54.964 ] 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.964 "name": "Existed_Raid", 00:30:54.964 "uuid": "d0869d7d-0e39-42cf-96c7-401d15d2456c", 00:30:54.964 "strip_size_kb": 64, 00:30:54.964 "state": "configuring", 00:30:54.964 "raid_level": "raid5f", 00:30:54.964 "superblock": true, 00:30:54.964 "num_base_bdevs": 3, 00:30:54.964 "num_base_bdevs_discovered": 2, 00:30:54.964 "num_base_bdevs_operational": 3, 00:30:54.964 "base_bdevs_list": [ 00:30:54.964 { 00:30:54.964 "name": "BaseBdev1", 00:30:54.964 "uuid": "51f3dee8-ccba-4817-bd45-661f8c738db2", 00:30:54.964 "is_configured": true, 00:30:54.964 "data_offset": 2048, 00:30:54.964 "data_size": 63488 00:30:54.964 }, 00:30:54.964 { 00:30:54.964 "name": "BaseBdev2", 00:30:54.964 "uuid": "152d5d9b-c95a-4328-a6ac-c81eed674104", 00:30:54.964 "is_configured": true, 00:30:54.964 "data_offset": 2048, 00:30:54.964 "data_size": 63488 00:30:54.964 }, 00:30:54.964 { 00:30:54.964 "name": "BaseBdev3", 00:30:54.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.964 "is_configured": false, 00:30:54.964 "data_offset": 0, 00:30:54.964 "data_size": 0 00:30:54.964 } 00:30:54.964 ] 00:30:54.964 }' 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.964 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.222 [2024-11-05 16:00:27.545339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:55.222 BaseBdev3 00:30:55.222 [2024-11-05 16:00:27.545705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:55.222 [2024-11-05 16:00:27.545732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:55.222 [2024-11-05 16:00:27.546020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.222 [2024-11-05 16:00:27.549983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:55.222 [2024-11-05 16:00:27.550077] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:55.222 [2024-11-05 16:00:27.550387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.222 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.222 [ 00:30:55.222 { 00:30:55.222 "name": "BaseBdev3", 00:30:55.222 "aliases": [ 00:30:55.222 "045f0699-d0de-4d3d-a4a7-437e741a7d55" 00:30:55.222 ], 00:30:55.222 "product_name": "Malloc disk", 00:30:55.223 "block_size": 512, 00:30:55.223 "num_blocks": 65536, 00:30:55.223 "uuid": "045f0699-d0de-4d3d-a4a7-437e741a7d55", 00:30:55.223 "assigned_rate_limits": { 00:30:55.223 "rw_ios_per_sec": 0, 00:30:55.223 "rw_mbytes_per_sec": 0, 00:30:55.223 "r_mbytes_per_sec": 0, 00:30:55.223 "w_mbytes_per_sec": 0 00:30:55.223 }, 00:30:55.223 "claimed": true, 00:30:55.223 "claim_type": "exclusive_write", 00:30:55.223 "zoned": false, 00:30:55.223 "supported_io_types": { 00:30:55.223 "read": true, 00:30:55.223 "write": true, 00:30:55.223 "unmap": true, 00:30:55.223 "flush": true, 00:30:55.223 "reset": true, 00:30:55.223 "nvme_admin": false, 00:30:55.223 "nvme_io": false, 00:30:55.223 "nvme_io_md": false, 00:30:55.223 "write_zeroes": true, 00:30:55.223 "zcopy": true, 00:30:55.223 "get_zone_info": false, 00:30:55.223 "zone_management": false, 00:30:55.223 "zone_append": false, 00:30:55.223 "compare": false, 00:30:55.223 "compare_and_write": false, 00:30:55.223 "abort": true, 00:30:55.223 "seek_hole": false, 00:30:55.223 "seek_data": false, 00:30:55.223 "copy": true, 00:30:55.223 
"nvme_iov_md": false 00:30:55.223 }, 00:30:55.223 "memory_domains": [ 00:30:55.223 { 00:30:55.223 "dma_device_id": "system", 00:30:55.223 "dma_device_type": 1 00:30:55.223 }, 00:30:55.223 { 00:30:55.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.223 "dma_device_type": 2 00:30:55.223 } 00:30:55.223 ], 00:30:55.223 "driver_specific": {} 00:30:55.223 } 00:30:55.223 ] 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.223 "name": "Existed_Raid", 00:30:55.223 "uuid": "d0869d7d-0e39-42cf-96c7-401d15d2456c", 00:30:55.223 "strip_size_kb": 64, 00:30:55.223 "state": "online", 00:30:55.223 "raid_level": "raid5f", 00:30:55.223 "superblock": true, 00:30:55.223 "num_base_bdevs": 3, 00:30:55.223 "num_base_bdevs_discovered": 3, 00:30:55.223 "num_base_bdevs_operational": 3, 00:30:55.223 "base_bdevs_list": [ 00:30:55.223 { 00:30:55.223 "name": "BaseBdev1", 00:30:55.223 "uuid": "51f3dee8-ccba-4817-bd45-661f8c738db2", 00:30:55.223 "is_configured": true, 00:30:55.223 "data_offset": 2048, 00:30:55.223 "data_size": 63488 00:30:55.223 }, 00:30:55.223 { 00:30:55.223 "name": "BaseBdev2", 00:30:55.223 "uuid": "152d5d9b-c95a-4328-a6ac-c81eed674104", 00:30:55.223 "is_configured": true, 00:30:55.223 "data_offset": 2048, 00:30:55.223 "data_size": 63488 00:30:55.223 }, 00:30:55.223 { 00:30:55.223 "name": "BaseBdev3", 00:30:55.223 "uuid": "045f0699-d0de-4d3d-a4a7-437e741a7d55", 00:30:55.223 "is_configured": true, 00:30:55.223 "data_offset": 2048, 00:30:55.223 "data_size": 63488 00:30:55.223 } 00:30:55.223 ] 00:30:55.223 }' 00:30:55.223 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:55.223 16:00:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.481 [2024-11-05 16:00:27.874741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:55.481 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:55.740 "name": "Existed_Raid", 00:30:55.740 "aliases": [ 00:30:55.740 "d0869d7d-0e39-42cf-96c7-401d15d2456c" 00:30:55.740 ], 00:30:55.740 "product_name": "Raid Volume", 00:30:55.740 "block_size": 512, 00:30:55.740 "num_blocks": 126976, 00:30:55.740 "uuid": "d0869d7d-0e39-42cf-96c7-401d15d2456c", 00:30:55.740 "assigned_rate_limits": { 00:30:55.740 "rw_ios_per_sec": 0, 00:30:55.740 
"rw_mbytes_per_sec": 0, 00:30:55.740 "r_mbytes_per_sec": 0, 00:30:55.740 "w_mbytes_per_sec": 0 00:30:55.740 }, 00:30:55.740 "claimed": false, 00:30:55.740 "zoned": false, 00:30:55.740 "supported_io_types": { 00:30:55.740 "read": true, 00:30:55.740 "write": true, 00:30:55.740 "unmap": false, 00:30:55.740 "flush": false, 00:30:55.740 "reset": true, 00:30:55.740 "nvme_admin": false, 00:30:55.740 "nvme_io": false, 00:30:55.740 "nvme_io_md": false, 00:30:55.740 "write_zeroes": true, 00:30:55.740 "zcopy": false, 00:30:55.740 "get_zone_info": false, 00:30:55.740 "zone_management": false, 00:30:55.740 "zone_append": false, 00:30:55.740 "compare": false, 00:30:55.740 "compare_and_write": false, 00:30:55.740 "abort": false, 00:30:55.740 "seek_hole": false, 00:30:55.740 "seek_data": false, 00:30:55.740 "copy": false, 00:30:55.740 "nvme_iov_md": false 00:30:55.740 }, 00:30:55.740 "driver_specific": { 00:30:55.740 "raid": { 00:30:55.740 "uuid": "d0869d7d-0e39-42cf-96c7-401d15d2456c", 00:30:55.740 "strip_size_kb": 64, 00:30:55.740 "state": "online", 00:30:55.740 "raid_level": "raid5f", 00:30:55.740 "superblock": true, 00:30:55.740 "num_base_bdevs": 3, 00:30:55.740 "num_base_bdevs_discovered": 3, 00:30:55.740 "num_base_bdevs_operational": 3, 00:30:55.740 "base_bdevs_list": [ 00:30:55.740 { 00:30:55.740 "name": "BaseBdev1", 00:30:55.740 "uuid": "51f3dee8-ccba-4817-bd45-661f8c738db2", 00:30:55.740 "is_configured": true, 00:30:55.740 "data_offset": 2048, 00:30:55.740 "data_size": 63488 00:30:55.740 }, 00:30:55.740 { 00:30:55.740 "name": "BaseBdev2", 00:30:55.740 "uuid": "152d5d9b-c95a-4328-a6ac-c81eed674104", 00:30:55.740 "is_configured": true, 00:30:55.740 "data_offset": 2048, 00:30:55.740 "data_size": 63488 00:30:55.740 }, 00:30:55.740 { 00:30:55.740 "name": "BaseBdev3", 00:30:55.740 "uuid": "045f0699-d0de-4d3d-a4a7-437e741a7d55", 00:30:55.740 "is_configured": true, 00:30:55.740 "data_offset": 2048, 00:30:55.740 "data_size": 63488 00:30:55.740 } 00:30:55.740 ] 00:30:55.740 } 
00:30:55.740 } 00:30:55.740 }' 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:55.740 BaseBdev2 00:30:55.740 BaseBdev3' 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:55.740 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:55.741 16:00:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.741 16:00:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.741 [2024-11-05 16:00:28.062603] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.741 "name": "Existed_Raid", 00:30:55.741 "uuid": "d0869d7d-0e39-42cf-96c7-401d15d2456c", 00:30:55.741 "strip_size_kb": 64, 00:30:55.741 "state": "online", 00:30:55.741 "raid_level": "raid5f", 00:30:55.741 "superblock": true, 00:30:55.741 "num_base_bdevs": 3, 00:30:55.741 "num_base_bdevs_discovered": 2, 00:30:55.741 "num_base_bdevs_operational": 2, 00:30:55.741 "base_bdevs_list": [ 00:30:55.741 { 00:30:55.741 "name": null, 00:30:55.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.741 "is_configured": false, 00:30:55.741 "data_offset": 0, 00:30:55.741 "data_size": 63488 00:30:55.741 }, 00:30:55.741 { 00:30:55.741 "name": "BaseBdev2", 00:30:55.741 "uuid": "152d5d9b-c95a-4328-a6ac-c81eed674104", 00:30:55.741 "is_configured": true, 00:30:55.741 "data_offset": 2048, 00:30:55.741 "data_size": 63488 00:30:55.741 }, 00:30:55.741 { 00:30:55.741 "name": "BaseBdev3", 00:30:55.741 "uuid": "045f0699-d0de-4d3d-a4a7-437e741a7d55", 00:30:55.741 "is_configured": true, 00:30:55.741 "data_offset": 2048, 00:30:55.741 "data_size": 63488 00:30:55.741 } 00:30:55.741 ] 00:30:55.741 }' 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:55.741 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.307 16:00:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.307 [2024-11-05 16:00:28.476024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:56.307 [2024-11-05 16:00:28.476245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:56.307 [2024-11-05 16:00:28.535040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.307 [2024-11-05 16:00:28.571101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:56.307 [2024-11-05 16:00:28.571214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:56.307 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.308 BaseBdev2 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # 
[[ -z '' ]] 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.308 [ 00:30:56.308 { 00:30:56.308 "name": "BaseBdev2", 00:30:56.308 "aliases": [ 00:30:56.308 "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a" 00:30:56.308 ], 00:30:56.308 "product_name": "Malloc disk", 00:30:56.308 "block_size": 512, 00:30:56.308 "num_blocks": 65536, 00:30:56.308 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:56.308 "assigned_rate_limits": { 00:30:56.308 "rw_ios_per_sec": 0, 00:30:56.308 "rw_mbytes_per_sec": 0, 00:30:56.308 "r_mbytes_per_sec": 0, 00:30:56.308 "w_mbytes_per_sec": 0 00:30:56.308 }, 00:30:56.308 "claimed": false, 00:30:56.308 "zoned": false, 00:30:56.308 "supported_io_types": { 00:30:56.308 "read": true, 00:30:56.308 "write": true, 00:30:56.308 "unmap": true, 00:30:56.308 "flush": true, 00:30:56.308 "reset": true, 00:30:56.308 "nvme_admin": false, 00:30:56.308 "nvme_io": false, 00:30:56.308 "nvme_io_md": false, 00:30:56.308 "write_zeroes": true, 00:30:56.308 "zcopy": true, 00:30:56.308 "get_zone_info": false, 00:30:56.308 "zone_management": false, 00:30:56.308 "zone_append": false, 
00:30:56.308 "compare": false, 00:30:56.308 "compare_and_write": false, 00:30:56.308 "abort": true, 00:30:56.308 "seek_hole": false, 00:30:56.308 "seek_data": false, 00:30:56.308 "copy": true, 00:30:56.308 "nvme_iov_md": false 00:30:56.308 }, 00:30:56.308 "memory_domains": [ 00:30:56.308 { 00:30:56.308 "dma_device_id": "system", 00:30:56.308 "dma_device_type": 1 00:30:56.308 }, 00:30:56.308 { 00:30:56.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:56.308 "dma_device_type": 2 00:30:56.308 } 00:30:56.308 ], 00:30:56.308 "driver_specific": {} 00:30:56.308 } 00:30:56.308 ] 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.308 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.566 BaseBdev3 00:30:56.566 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.566 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:30:56.567 
16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.567 [ 00:30:56.567 { 00:30:56.567 "name": "BaseBdev3", 00:30:56.567 "aliases": [ 00:30:56.567 "40afd5e3-7775-45f9-abe3-bcd78db33cd8" 00:30:56.567 ], 00:30:56.567 "product_name": "Malloc disk", 00:30:56.567 "block_size": 512, 00:30:56.567 "num_blocks": 65536, 00:30:56.567 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:56.567 "assigned_rate_limits": { 00:30:56.567 "rw_ios_per_sec": 0, 00:30:56.567 "rw_mbytes_per_sec": 0, 00:30:56.567 "r_mbytes_per_sec": 0, 00:30:56.567 "w_mbytes_per_sec": 0 00:30:56.567 }, 00:30:56.567 "claimed": false, 00:30:56.567 "zoned": false, 00:30:56.567 "supported_io_types": { 00:30:56.567 "read": true, 00:30:56.567 "write": true, 00:30:56.567 "unmap": true, 00:30:56.567 "flush": true, 00:30:56.567 "reset": true, 00:30:56.567 "nvme_admin": false, 00:30:56.567 "nvme_io": false, 00:30:56.567 "nvme_io_md": false, 00:30:56.567 "write_zeroes": true, 00:30:56.567 "zcopy": true, 00:30:56.567 "get_zone_info": 
false, 00:30:56.567 "zone_management": false, 00:30:56.567 "zone_append": false, 00:30:56.567 "compare": false, 00:30:56.567 "compare_and_write": false, 00:30:56.567 "abort": true, 00:30:56.567 "seek_hole": false, 00:30:56.567 "seek_data": false, 00:30:56.567 "copy": true, 00:30:56.567 "nvme_iov_md": false 00:30:56.567 }, 00:30:56.567 "memory_domains": [ 00:30:56.567 { 00:30:56.567 "dma_device_id": "system", 00:30:56.567 "dma_device_type": 1 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:56.567 "dma_device_type": 2 00:30:56.567 } 00:30:56.567 ], 00:30:56.567 "driver_specific": {} 00:30:56.567 } 00:30:56.567 ] 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.567 [2024-11-05 16:00:28.760665] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:56.567 [2024-11-05 16:00:28.760708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:56.567 [2024-11-05 16:00:28.760729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:56.567 [2024-11-05 16:00:28.762541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.567 16:00:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.567 "name": "Existed_Raid", 00:30:56.567 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:56.567 "strip_size_kb": 64, 00:30:56.567 "state": "configuring", 00:30:56.567 "raid_level": "raid5f", 00:30:56.567 "superblock": true, 00:30:56.567 "num_base_bdevs": 3, 00:30:56.567 "num_base_bdevs_discovered": 2, 00:30:56.567 "num_base_bdevs_operational": 3, 00:30:56.567 "base_bdevs_list": [ 00:30:56.567 { 00:30:56.567 "name": "BaseBdev1", 00:30:56.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.567 "is_configured": false, 00:30:56.567 "data_offset": 0, 00:30:56.567 "data_size": 0 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "name": "BaseBdev2", 00:30:56.567 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:56.567 "is_configured": true, 00:30:56.567 "data_offset": 2048, 00:30:56.567 "data_size": 63488 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "name": "BaseBdev3", 00:30:56.567 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:56.567 "is_configured": true, 00:30:56.567 "data_offset": 2048, 00:30:56.567 "data_size": 63488 00:30:56.567 } 00:30:56.567 ] 00:30:56.567 }' 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.567 16:00:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.825 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:56.825 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.825 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.825 [2024-11-05 16:00:29.084727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:56.825 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.826 
16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.826 "name": "Existed_Raid", 00:30:56.826 "uuid": 
"d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:56.826 "strip_size_kb": 64, 00:30:56.826 "state": "configuring", 00:30:56.826 "raid_level": "raid5f", 00:30:56.826 "superblock": true, 00:30:56.826 "num_base_bdevs": 3, 00:30:56.826 "num_base_bdevs_discovered": 1, 00:30:56.826 "num_base_bdevs_operational": 3, 00:30:56.826 "base_bdevs_list": [ 00:30:56.826 { 00:30:56.826 "name": "BaseBdev1", 00:30:56.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.826 "is_configured": false, 00:30:56.826 "data_offset": 0, 00:30:56.826 "data_size": 0 00:30:56.826 }, 00:30:56.826 { 00:30:56.826 "name": null, 00:30:56.826 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:56.826 "is_configured": false, 00:30:56.826 "data_offset": 0, 00:30:56.826 "data_size": 63488 00:30:56.826 }, 00:30:56.826 { 00:30:56.826 "name": "BaseBdev3", 00:30:56.826 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:56.826 "is_configured": true, 00:30:56.826 "data_offset": 2048, 00:30:56.826 "data_size": 63488 00:30:56.826 } 00:30:56.826 ] 00:30:56.826 }' 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.826 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:57.084 16:00:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.084 [2024-11-05 16:00:29.458681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:57.084 BaseBdev1 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.084 [ 00:30:57.084 { 00:30:57.084 "name": "BaseBdev1", 00:30:57.084 "aliases": [ 00:30:57.084 "712023d2-6274-4f18-9bf6-7efecccfae7a" 00:30:57.084 ], 00:30:57.084 "product_name": "Malloc disk", 00:30:57.084 "block_size": 512, 00:30:57.084 "num_blocks": 65536, 00:30:57.084 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:57.084 "assigned_rate_limits": { 00:30:57.084 "rw_ios_per_sec": 0, 00:30:57.084 "rw_mbytes_per_sec": 0, 00:30:57.084 "r_mbytes_per_sec": 0, 00:30:57.084 "w_mbytes_per_sec": 0 00:30:57.084 }, 00:30:57.084 "claimed": true, 00:30:57.084 "claim_type": "exclusive_write", 00:30:57.084 "zoned": false, 00:30:57.084 "supported_io_types": { 00:30:57.084 "read": true, 00:30:57.084 "write": true, 00:30:57.084 "unmap": true, 00:30:57.084 "flush": true, 00:30:57.084 "reset": true, 00:30:57.084 "nvme_admin": false, 00:30:57.084 "nvme_io": false, 00:30:57.084 "nvme_io_md": false, 00:30:57.084 "write_zeroes": true, 00:30:57.084 "zcopy": true, 00:30:57.084 "get_zone_info": false, 00:30:57.084 "zone_management": false, 00:30:57.084 "zone_append": false, 00:30:57.084 "compare": false, 00:30:57.084 "compare_and_write": false, 00:30:57.084 "abort": true, 00:30:57.084 "seek_hole": false, 00:30:57.084 "seek_data": false, 00:30:57.084 "copy": true, 00:30:57.084 "nvme_iov_md": false 00:30:57.084 }, 00:30:57.084 "memory_domains": [ 00:30:57.084 { 00:30:57.084 "dma_device_id": "system", 00:30:57.084 "dma_device_type": 1 00:30:57.084 }, 00:30:57.084 { 00:30:57.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.084 "dma_device_type": 2 00:30:57.084 } 00:30:57.084 ], 00:30:57.084 "driver_specific": {} 00:30:57.084 } 00:30:57.084 ] 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # 
return 0 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.084 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.085 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.342 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.342 "name": "Existed_Raid", 00:30:57.342 "uuid": 
"d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:57.342 "strip_size_kb": 64, 00:30:57.342 "state": "configuring", 00:30:57.342 "raid_level": "raid5f", 00:30:57.342 "superblock": true, 00:30:57.342 "num_base_bdevs": 3, 00:30:57.342 "num_base_bdevs_discovered": 2, 00:30:57.342 "num_base_bdevs_operational": 3, 00:30:57.342 "base_bdevs_list": [ 00:30:57.342 { 00:30:57.342 "name": "BaseBdev1", 00:30:57.342 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:57.342 "is_configured": true, 00:30:57.342 "data_offset": 2048, 00:30:57.342 "data_size": 63488 00:30:57.342 }, 00:30:57.342 { 00:30:57.342 "name": null, 00:30:57.342 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:57.342 "is_configured": false, 00:30:57.342 "data_offset": 0, 00:30:57.342 "data_size": 63488 00:30:57.342 }, 00:30:57.342 { 00:30:57.342 "name": "BaseBdev3", 00:30:57.342 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:57.342 "is_configured": true, 00:30:57.342 "data_offset": 2048, 00:30:57.342 "data_size": 63488 00:30:57.342 } 00:30:57.342 ] 00:30:57.342 }' 00:30:57.342 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.342 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:57.600 16:00:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.600 [2024-11-05 16:00:29.846793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.600 "name": "Existed_Raid", 00:30:57.600 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:57.600 "strip_size_kb": 64, 00:30:57.600 "state": "configuring", 00:30:57.600 "raid_level": "raid5f", 00:30:57.600 "superblock": true, 00:30:57.600 "num_base_bdevs": 3, 00:30:57.600 "num_base_bdevs_discovered": 1, 00:30:57.600 "num_base_bdevs_operational": 3, 00:30:57.600 "base_bdevs_list": [ 00:30:57.600 { 00:30:57.600 "name": "BaseBdev1", 00:30:57.600 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:57.600 "is_configured": true, 00:30:57.600 "data_offset": 2048, 00:30:57.600 "data_size": 63488 00:30:57.600 }, 00:30:57.600 { 00:30:57.600 "name": null, 00:30:57.600 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:57.600 "is_configured": false, 00:30:57.600 "data_offset": 0, 00:30:57.600 "data_size": 63488 00:30:57.600 }, 00:30:57.600 { 00:30:57.600 "name": null, 00:30:57.600 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:57.600 "is_configured": false, 00:30:57.600 "data_offset": 0, 00:30:57.600 "data_size": 63488 00:30:57.600 } 00:30:57.600 ] 00:30:57.600 }' 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.600 16:00:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.858 [2024-11-05 16:00:30.206898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.858 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.858 "name": "Existed_Raid", 00:30:57.858 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:57.858 "strip_size_kb": 64, 00:30:57.858 "state": "configuring", 00:30:57.858 "raid_level": "raid5f", 00:30:57.858 "superblock": true, 00:30:57.858 "num_base_bdevs": 3, 00:30:57.858 "num_base_bdevs_discovered": 2, 00:30:57.858 "num_base_bdevs_operational": 3, 00:30:57.858 "base_bdevs_list": [ 00:30:57.858 { 00:30:57.858 "name": "BaseBdev1", 00:30:57.858 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:57.858 "is_configured": true, 00:30:57.858 "data_offset": 2048, 00:30:57.858 "data_size": 63488 00:30:57.858 }, 00:30:57.858 { 00:30:57.858 "name": null, 00:30:57.858 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:57.858 "is_configured": false, 00:30:57.858 "data_offset": 0, 00:30:57.858 "data_size": 63488 00:30:57.858 }, 00:30:57.858 { 00:30:57.858 "name": "BaseBdev3", 00:30:57.858 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 
00:30:57.858 "is_configured": true, 00:30:57.858 "data_offset": 2048, 00:30:57.858 "data_size": 63488 00:30:57.859 } 00:30:57.859 ] 00:30:57.859 }' 00:30:57.859 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.859 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.117 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.117 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.117 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.117 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:58.117 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.375 [2024-11-05 16:00:30.550959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.375 "name": "Existed_Raid", 00:30:58.375 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:58.375 "strip_size_kb": 64, 00:30:58.375 "state": "configuring", 00:30:58.375 "raid_level": "raid5f", 00:30:58.375 "superblock": true, 00:30:58.375 "num_base_bdevs": 3, 00:30:58.375 "num_base_bdevs_discovered": 1, 00:30:58.375 "num_base_bdevs_operational": 3, 00:30:58.375 "base_bdevs_list": [ 00:30:58.375 { 00:30:58.375 
"name": null, 00:30:58.375 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:58.375 "is_configured": false, 00:30:58.375 "data_offset": 0, 00:30:58.375 "data_size": 63488 00:30:58.375 }, 00:30:58.375 { 00:30:58.375 "name": null, 00:30:58.375 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:58.375 "is_configured": false, 00:30:58.375 "data_offset": 0, 00:30:58.375 "data_size": 63488 00:30:58.375 }, 00:30:58.375 { 00:30:58.375 "name": "BaseBdev3", 00:30:58.375 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:58.375 "is_configured": true, 00:30:58.375 "data_offset": 2048, 00:30:58.375 "data_size": 63488 00:30:58.375 } 00:30:58.375 ] 00:30:58.375 }' 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.375 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.634 [2024-11-05 
16:00:30.955985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.634 "name": "Existed_Raid", 00:30:58.634 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:58.634 "strip_size_kb": 64, 00:30:58.634 "state": "configuring", 00:30:58.634 "raid_level": "raid5f", 00:30:58.634 "superblock": true, 00:30:58.634 "num_base_bdevs": 3, 00:30:58.634 "num_base_bdevs_discovered": 2, 00:30:58.634 "num_base_bdevs_operational": 3, 00:30:58.634 "base_bdevs_list": [ 00:30:58.634 { 00:30:58.634 "name": null, 00:30:58.634 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:58.634 "is_configured": false, 00:30:58.634 "data_offset": 0, 00:30:58.634 "data_size": 63488 00:30:58.634 }, 00:30:58.634 { 00:30:58.634 "name": "BaseBdev2", 00:30:58.634 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:58.634 "is_configured": true, 00:30:58.634 "data_offset": 2048, 00:30:58.634 "data_size": 63488 00:30:58.634 }, 00:30:58.634 { 00:30:58.634 "name": "BaseBdev3", 00:30:58.634 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:58.634 "is_configured": true, 00:30:58.634 "data_offset": 2048, 00:30:58.634 "data_size": 63488 00:30:58.634 } 00:30:58.634 ] 00:30:58.634 }' 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.634 16:00:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:58.892 16:00:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.892 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 712023d2-6274-4f18-9bf6-7efecccfae7a 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.151 [2024-11-05 16:00:31.341925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:59.151 [2024-11-05 16:00:31.342070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:59.151 [2024-11-05 16:00:31.342082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:59.151 [2024-11-05 16:00:31.342271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:59.151 NewBaseBdev 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:59.151 16:00:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.151 [2024-11-05 16:00:31.345146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:59.151 [2024-11-05 16:00:31.345157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:59.151 [2024-11-05 16:00:31.345255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.151 [ 00:30:59.151 { 00:30:59.151 "name": "NewBaseBdev", 00:30:59.151 "aliases": [ 00:30:59.151 "712023d2-6274-4f18-9bf6-7efecccfae7a" 00:30:59.151 ], 00:30:59.151 "product_name": "Malloc 
disk", 00:30:59.151 "block_size": 512, 00:30:59.151 "num_blocks": 65536, 00:30:59.151 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:59.151 "assigned_rate_limits": { 00:30:59.151 "rw_ios_per_sec": 0, 00:30:59.151 "rw_mbytes_per_sec": 0, 00:30:59.151 "r_mbytes_per_sec": 0, 00:30:59.151 "w_mbytes_per_sec": 0 00:30:59.151 }, 00:30:59.151 "claimed": true, 00:30:59.151 "claim_type": "exclusive_write", 00:30:59.151 "zoned": false, 00:30:59.151 "supported_io_types": { 00:30:59.151 "read": true, 00:30:59.151 "write": true, 00:30:59.151 "unmap": true, 00:30:59.151 "flush": true, 00:30:59.151 "reset": true, 00:30:59.151 "nvme_admin": false, 00:30:59.151 "nvme_io": false, 00:30:59.151 "nvme_io_md": false, 00:30:59.151 "write_zeroes": true, 00:30:59.151 "zcopy": true, 00:30:59.151 "get_zone_info": false, 00:30:59.151 "zone_management": false, 00:30:59.151 "zone_append": false, 00:30:59.151 "compare": false, 00:30:59.151 "compare_and_write": false, 00:30:59.151 "abort": true, 00:30:59.151 "seek_hole": false, 00:30:59.151 "seek_data": false, 00:30:59.151 "copy": true, 00:30:59.151 "nvme_iov_md": false 00:30:59.151 }, 00:30:59.151 "memory_domains": [ 00:30:59.151 { 00:30:59.151 "dma_device_id": "system", 00:30:59.151 "dma_device_type": 1 00:30:59.151 }, 00:30:59.151 { 00:30:59.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.151 "dma_device_type": 2 00:30:59.151 } 00:30:59.151 ], 00:30:59.151 "driver_specific": {} 00:30:59.151 } 00:30:59.151 ] 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:59.151 16:00:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.151 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.152 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:59.152 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.152 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.152 "name": "Existed_Raid", 00:30:59.152 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:59.152 "strip_size_kb": 64, 00:30:59.152 "state": "online", 00:30:59.152 "raid_level": "raid5f", 00:30:59.152 "superblock": true, 00:30:59.152 "num_base_bdevs": 3, 00:30:59.152 "num_base_bdevs_discovered": 3, 00:30:59.152 "num_base_bdevs_operational": 3, 00:30:59.152 
"base_bdevs_list": [ 00:30:59.152 { 00:30:59.152 "name": "NewBaseBdev", 00:30:59.152 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:59.152 "is_configured": true, 00:30:59.152 "data_offset": 2048, 00:30:59.152 "data_size": 63488 00:30:59.152 }, 00:30:59.152 { 00:30:59.152 "name": "BaseBdev2", 00:30:59.152 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:59.152 "is_configured": true, 00:30:59.152 "data_offset": 2048, 00:30:59.152 "data_size": 63488 00:30:59.152 }, 00:30:59.152 { 00:30:59.152 "name": "BaseBdev3", 00:30:59.152 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:59.152 "is_configured": true, 00:30:59.152 "data_offset": 2048, 00:30:59.152 "data_size": 63488 00:30:59.152 } 00:30:59.152 ] 00:30:59.152 }' 00:30:59.152 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.152 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.410 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:59.410 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:59.410 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:59.410 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:59.410 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.411 [2024-11-05 16:00:31.676595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:59.411 "name": "Existed_Raid", 00:30:59.411 "aliases": [ 00:30:59.411 "d4913ca7-90bf-4fcf-ad57-377efc9791db" 00:30:59.411 ], 00:30:59.411 "product_name": "Raid Volume", 00:30:59.411 "block_size": 512, 00:30:59.411 "num_blocks": 126976, 00:30:59.411 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:59.411 "assigned_rate_limits": { 00:30:59.411 "rw_ios_per_sec": 0, 00:30:59.411 "rw_mbytes_per_sec": 0, 00:30:59.411 "r_mbytes_per_sec": 0, 00:30:59.411 "w_mbytes_per_sec": 0 00:30:59.411 }, 00:30:59.411 "claimed": false, 00:30:59.411 "zoned": false, 00:30:59.411 "supported_io_types": { 00:30:59.411 "read": true, 00:30:59.411 "write": true, 00:30:59.411 "unmap": false, 00:30:59.411 "flush": false, 00:30:59.411 "reset": true, 00:30:59.411 "nvme_admin": false, 00:30:59.411 "nvme_io": false, 00:30:59.411 "nvme_io_md": false, 00:30:59.411 "write_zeroes": true, 00:30:59.411 "zcopy": false, 00:30:59.411 "get_zone_info": false, 00:30:59.411 "zone_management": false, 00:30:59.411 "zone_append": false, 00:30:59.411 "compare": false, 00:30:59.411 "compare_and_write": false, 00:30:59.411 "abort": false, 00:30:59.411 "seek_hole": false, 00:30:59.411 "seek_data": false, 00:30:59.411 "copy": false, 00:30:59.411 "nvme_iov_md": false 00:30:59.411 }, 00:30:59.411 "driver_specific": { 00:30:59.411 "raid": { 00:30:59.411 "uuid": "d4913ca7-90bf-4fcf-ad57-377efc9791db", 00:30:59.411 "strip_size_kb": 64, 00:30:59.411 "state": "online", 00:30:59.411 "raid_level": "raid5f", 00:30:59.411 "superblock": true, 00:30:59.411 
"num_base_bdevs": 3, 00:30:59.411 "num_base_bdevs_discovered": 3, 00:30:59.411 "num_base_bdevs_operational": 3, 00:30:59.411 "base_bdevs_list": [ 00:30:59.411 { 00:30:59.411 "name": "NewBaseBdev", 00:30:59.411 "uuid": "712023d2-6274-4f18-9bf6-7efecccfae7a", 00:30:59.411 "is_configured": true, 00:30:59.411 "data_offset": 2048, 00:30:59.411 "data_size": 63488 00:30:59.411 }, 00:30:59.411 { 00:30:59.411 "name": "BaseBdev2", 00:30:59.411 "uuid": "bdd3ef17-48c6-43ad-b89c-8fa2a5a6510a", 00:30:59.411 "is_configured": true, 00:30:59.411 "data_offset": 2048, 00:30:59.411 "data_size": 63488 00:30:59.411 }, 00:30:59.411 { 00:30:59.411 "name": "BaseBdev3", 00:30:59.411 "uuid": "40afd5e3-7775-45f9-abe3-bcd78db33cd8", 00:30:59.411 "is_configured": true, 00:30:59.411 "data_offset": 2048, 00:30:59.411 "data_size": 63488 00:30:59.411 } 00:30:59.411 ] 00:30:59.411 } 00:30:59.411 } 00:30:59.411 }' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:59.411 BaseBdev2 00:30:59.411 BaseBdev3' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.411 
16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:59.411 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.670 [2024-11-05 16:00:31.844460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:59.670 [2024-11-05 16:00:31.844480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:59.670 [2024-11-05 16:00:31.844532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:59.670 [2024-11-05 16:00:31.844747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:59.670 [2024-11-05 16:00:31.844762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78002 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78002 ']' 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 78002 00:30:59.670 16:00:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78002 00:30:59.670 killing process with pid 78002 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78002' 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 78002 00:30:59.670 [2024-11-05 16:00:31.868692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:59.670 16:00:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 78002 00:30:59.670 [2024-11-05 16:00:32.013803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:00.237 16:00:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:31:00.237 00:31:00.237 real 0m7.350s 00:31:00.237 user 0m11.888s 00:31:00.237 sys 0m1.160s 00:31:00.237 16:00:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:00.237 16:00:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:00.237 ************************************ 00:31:00.237 END TEST raid5f_state_function_test_sb 00:31:00.237 ************************************ 00:31:00.237 16:00:32 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:31:00.237 16:00:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:00.237 
16:00:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:00.237 16:00:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:00.237 ************************************ 00:31:00.237 START TEST raid5f_superblock_test 00:31:00.237 ************************************ 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78588 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78588 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 78588 ']' 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:00.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.237 16:00:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:00.496 [2024-11-05 16:00:32.680365] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:31:00.496 [2024-11-05 16:00:32.680490] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78588 ] 00:31:00.496 [2024-11-05 16:00:32.840935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.754 [2024-11-05 16:00:32.937411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.754 [2024-11-05 16:00:33.073424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:00.754 [2024-11-05 16:00:33.073460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.321 malloc1 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.321 [2024-11-05 16:00:33.564115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:01.321 [2024-11-05 16:00:33.564176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.321 [2024-11-05 16:00:33.564196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:01.321 [2024-11-05 16:00:33.564206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.321 [2024-11-05 16:00:33.566311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.321 [2024-11-05 16:00:33.566347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:01.321 pt1 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.321 malloc2 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.321 [2024-11-05 16:00:33.599620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:01.321 [2024-11-05 16:00:33.599671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.321 [2024-11-05 16:00:33.599691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:01.321 [2024-11-05 16:00:33.599700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.321 [2024-11-05 16:00:33.601764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.321 [2024-11-05 16:00:33.601796] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:01.321 pt2 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.321 malloc3 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.321 [2024-11-05 16:00:33.643559] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:01.321 [2024-11-05 16:00:33.643607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.321 [2024-11-05 16:00:33.643627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:01.321 [2024-11-05 16:00:33.643636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.321 [2024-11-05 16:00:33.645668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.321 [2024-11-05 16:00:33.645701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:01.321 pt3 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:31:01.321 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 [2024-11-05 16:00:33.651598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:01.322 [2024-11-05 16:00:33.653393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:01.322 [2024-11-05 16:00:33.653456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:01.322 [2024-11-05 16:00:33.653608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:01.322 [2024-11-05 16:00:33.653631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:31:01.322 [2024-11-05 16:00:33.653874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:01.322 [2024-11-05 16:00:33.657587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:01.322 [2024-11-05 16:00:33.657608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:01.322 [2024-11-05 16:00:33.657775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:01.322 "name": "raid_bdev1", 00:31:01.322 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:01.322 "strip_size_kb": 64, 00:31:01.322 "state": "online", 00:31:01.322 "raid_level": "raid5f", 00:31:01.322 "superblock": true, 00:31:01.322 "num_base_bdevs": 3, 00:31:01.322 "num_base_bdevs_discovered": 3, 00:31:01.322 "num_base_bdevs_operational": 3, 00:31:01.322 "base_bdevs_list": [ 00:31:01.322 { 00:31:01.322 "name": "pt1", 00:31:01.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:01.322 "is_configured": true, 00:31:01.322 "data_offset": 2048, 00:31:01.322 "data_size": 63488 00:31:01.322 }, 00:31:01.322 { 00:31:01.322 "name": "pt2", 00:31:01.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:01.322 "is_configured": true, 00:31:01.322 "data_offset": 2048, 00:31:01.322 "data_size": 63488 00:31:01.322 }, 00:31:01.322 { 00:31:01.322 "name": "pt3", 00:31:01.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:01.322 "is_configured": true, 00:31:01.322 "data_offset": 2048, 00:31:01.322 "data_size": 63488 00:31:01.322 } 00:31:01.322 ] 00:31:01.322 }' 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:01.322 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:01.580 16:00:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.580 [2024-11-05 16:00:33.974064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:01.580 16:00:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.839 16:00:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:01.839 "name": "raid_bdev1", 00:31:01.839 "aliases": [ 00:31:01.839 "b680e075-6807-47b1-b60f-0687b129e235" 00:31:01.839 ], 00:31:01.839 "product_name": "Raid Volume", 00:31:01.839 "block_size": 512, 00:31:01.839 "num_blocks": 126976, 00:31:01.839 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:01.839 "assigned_rate_limits": { 00:31:01.839 "rw_ios_per_sec": 0, 00:31:01.839 "rw_mbytes_per_sec": 0, 00:31:01.839 "r_mbytes_per_sec": 0, 00:31:01.839 "w_mbytes_per_sec": 0 00:31:01.839 }, 00:31:01.839 "claimed": false, 00:31:01.839 "zoned": false, 00:31:01.839 "supported_io_types": { 00:31:01.839 "read": true, 00:31:01.839 "write": true, 00:31:01.839 "unmap": false, 00:31:01.839 "flush": false, 00:31:01.839 "reset": true, 00:31:01.839 "nvme_admin": false, 00:31:01.839 "nvme_io": false, 00:31:01.839 "nvme_io_md": false, 
00:31:01.839 "write_zeroes": true, 00:31:01.839 "zcopy": false, 00:31:01.839 "get_zone_info": false, 00:31:01.839 "zone_management": false, 00:31:01.839 "zone_append": false, 00:31:01.839 "compare": false, 00:31:01.839 "compare_and_write": false, 00:31:01.839 "abort": false, 00:31:01.839 "seek_hole": false, 00:31:01.839 "seek_data": false, 00:31:01.839 "copy": false, 00:31:01.839 "nvme_iov_md": false 00:31:01.839 }, 00:31:01.839 "driver_specific": { 00:31:01.839 "raid": { 00:31:01.839 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:01.839 "strip_size_kb": 64, 00:31:01.839 "state": "online", 00:31:01.839 "raid_level": "raid5f", 00:31:01.839 "superblock": true, 00:31:01.839 "num_base_bdevs": 3, 00:31:01.839 "num_base_bdevs_discovered": 3, 00:31:01.839 "num_base_bdevs_operational": 3, 00:31:01.839 "base_bdevs_list": [ 00:31:01.839 { 00:31:01.839 "name": "pt1", 00:31:01.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:01.839 "is_configured": true, 00:31:01.839 "data_offset": 2048, 00:31:01.839 "data_size": 63488 00:31:01.839 }, 00:31:01.839 { 00:31:01.839 "name": "pt2", 00:31:01.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:01.839 "is_configured": true, 00:31:01.839 "data_offset": 2048, 00:31:01.839 "data_size": 63488 00:31:01.839 }, 00:31:01.839 { 00:31:01.839 "name": "pt3", 00:31:01.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:01.839 "is_configured": true, 00:31:01.839 "data_offset": 2048, 00:31:01.839 "data_size": 63488 00:31:01.839 } 00:31:01.839 ] 00:31:01.839 } 00:31:01.839 } 00:31:01.839 }' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:01.839 pt2 00:31:01.839 pt3' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:01.839 
16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:01.839 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.840 [2024-11-05 16:00:34.174055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b680e075-6807-47b1-b60f-0687b129e235 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b680e075-6807-47b1-b60f-0687b129e235 ']' 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:01.840 16:00:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.840 [2024-11-05 16:00:34.209861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:01.840 [2024-11-05 16:00:34.209888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:01.840 [2024-11-05 16:00:34.209948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:01.840 [2024-11-05 16:00:34.210020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:01.840 [2024-11-05 16:00:34.210029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.840 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:02.098 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.099 [2024-11-05 16:00:34.313931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:02.099 [2024-11-05 16:00:34.315790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:02.099 [2024-11-05 16:00:34.315854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:02.099 [2024-11-05 16:00:34.315900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:02.099 [2024-11-05 16:00:34.315943] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:02.099 [2024-11-05 16:00:34.315963] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:02.099 [2024-11-05 16:00:34.315979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:02.099 [2024-11-05 16:00:34.315988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:02.099 request: 00:31:02.099 { 00:31:02.099 "name": "raid_bdev1", 00:31:02.099 "raid_level": "raid5f", 00:31:02.099 "base_bdevs": [ 00:31:02.099 "malloc1", 00:31:02.099 "malloc2", 00:31:02.099 "malloc3" 00:31:02.099 ], 00:31:02.099 "strip_size_kb": 64, 00:31:02.099 "superblock": false, 00:31:02.099 "method": "bdev_raid_create", 00:31:02.099 "req_id": 1 00:31:02.099 } 00:31:02.099 Got JSON-RPC error response 00:31:02.099 response: 00:31:02.099 { 00:31:02.099 "code": -17, 00:31:02.099 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:02.099 } 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:02.099 
16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.099 [2024-11-05 16:00:34.357894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:02.099 [2024-11-05 16:00:34.357935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.099 [2024-11-05 16:00:34.357951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:02.099 [2024-11-05 16:00:34.357959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.099 [2024-11-05 16:00:34.360068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.099 [2024-11-05 16:00:34.360100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:02.099 [2024-11-05 16:00:34.360163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:02.099 [2024-11-05 16:00:34.360201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:02.099 pt1 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:02.099 "name": "raid_bdev1", 00:31:02.099 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:02.099 "strip_size_kb": 64, 00:31:02.099 "state": "configuring", 00:31:02.099 "raid_level": "raid5f", 00:31:02.099 "superblock": true, 00:31:02.099 "num_base_bdevs": 3, 00:31:02.099 "num_base_bdevs_discovered": 1, 00:31:02.099 
"num_base_bdevs_operational": 3, 00:31:02.099 "base_bdevs_list": [ 00:31:02.099 { 00:31:02.099 "name": "pt1", 00:31:02.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:02.099 "is_configured": true, 00:31:02.099 "data_offset": 2048, 00:31:02.099 "data_size": 63488 00:31:02.099 }, 00:31:02.099 { 00:31:02.099 "name": null, 00:31:02.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:02.099 "is_configured": false, 00:31:02.099 "data_offset": 2048, 00:31:02.099 "data_size": 63488 00:31:02.099 }, 00:31:02.099 { 00:31:02.099 "name": null, 00:31:02.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:02.099 "is_configured": false, 00:31:02.099 "data_offset": 2048, 00:31:02.099 "data_size": 63488 00:31:02.099 } 00:31:02.099 ] 00:31:02.099 }' 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:02.099 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.357 [2024-11-05 16:00:34.661970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:02.357 [2024-11-05 16:00:34.662017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.357 [2024-11-05 16:00:34.662033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:02.357 [2024-11-05 16:00:34.662041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.357 [2024-11-05 16:00:34.662399] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.357 [2024-11-05 16:00:34.662426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:02.357 [2024-11-05 16:00:34.662487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:02.357 [2024-11-05 16:00:34.662504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:02.357 pt2 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.357 [2024-11-05 16:00:34.669986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:02.357 "name": "raid_bdev1", 00:31:02.357 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:02.357 "strip_size_kb": 64, 00:31:02.357 "state": "configuring", 00:31:02.357 "raid_level": "raid5f", 00:31:02.357 "superblock": true, 00:31:02.357 "num_base_bdevs": 3, 00:31:02.357 "num_base_bdevs_discovered": 1, 00:31:02.357 "num_base_bdevs_operational": 3, 00:31:02.357 "base_bdevs_list": [ 00:31:02.357 { 00:31:02.357 "name": "pt1", 00:31:02.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:02.357 "is_configured": true, 00:31:02.357 "data_offset": 2048, 00:31:02.357 "data_size": 63488 00:31:02.357 }, 00:31:02.357 { 00:31:02.357 "name": null, 00:31:02.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:02.357 "is_configured": false, 00:31:02.357 "data_offset": 0, 00:31:02.357 "data_size": 63488 00:31:02.357 }, 00:31:02.357 { 00:31:02.357 "name": null, 00:31:02.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:02.357 "is_configured": false, 00:31:02.357 "data_offset": 2048, 00:31:02.357 "data_size": 63488 00:31:02.357 } 00:31:02.357 ] 00:31:02.357 }' 00:31:02.357 16:00:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:02.357 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.615 [2024-11-05 16:00:34.994068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:02.615 [2024-11-05 16:00:34.994130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.615 [2024-11-05 16:00:34.994146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:02.615 [2024-11-05 16:00:34.994157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.615 [2024-11-05 16:00:34.994571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.615 [2024-11-05 16:00:34.994598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:02.615 [2024-11-05 16:00:34.994667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:02.615 [2024-11-05 16:00:34.994688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:02.615 pt2 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:02.615 16:00:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.615 16:00:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.615 [2024-11-05 16:00:35.002046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:02.615 [2024-11-05 16:00:35.002086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.615 [2024-11-05 16:00:35.002097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:02.615 [2024-11-05 16:00:35.002107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.615 [2024-11-05 16:00:35.002447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.615 [2024-11-05 16:00:35.002475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:02.615 [2024-11-05 16:00:35.002527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:02.615 [2024-11-05 16:00:35.002544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:02.615 [2024-11-05 16:00:35.002663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:02.615 [2024-11-05 16:00:35.002680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:02.615 [2024-11-05 16:00:35.002918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:02.615 [2024-11-05 16:00:35.006327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:02.615 [2024-11-05 16:00:35.006347] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:02.615 [2024-11-05 16:00:35.006502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:02.615 pt3 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:02.615 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:02.616 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:02.616 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:02.616 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.616 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.616 16:00:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.616 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.616 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.873 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:02.873 "name": "raid_bdev1", 00:31:02.873 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:02.873 "strip_size_kb": 64, 00:31:02.873 "state": "online", 00:31:02.873 "raid_level": "raid5f", 00:31:02.873 "superblock": true, 00:31:02.873 "num_base_bdevs": 3, 00:31:02.873 "num_base_bdevs_discovered": 3, 00:31:02.873 "num_base_bdevs_operational": 3, 00:31:02.873 "base_bdevs_list": [ 00:31:02.873 { 00:31:02.873 "name": "pt1", 00:31:02.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:02.873 "is_configured": true, 00:31:02.873 "data_offset": 2048, 00:31:02.873 "data_size": 63488 00:31:02.873 }, 00:31:02.873 { 00:31:02.873 "name": "pt2", 00:31:02.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:02.873 "is_configured": true, 00:31:02.873 "data_offset": 2048, 00:31:02.873 "data_size": 63488 00:31:02.873 }, 00:31:02.873 { 00:31:02.873 "name": "pt3", 00:31:02.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:02.873 "is_configured": true, 00:31:02.873 "data_offset": 2048, 00:31:02.873 "data_size": 63488 00:31:02.873 } 00:31:02.873 ] 00:31:02.873 }' 00:31:02.873 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:02.873 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.131 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.132 [2024-11-05 16:00:35.314749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:03.132 "name": "raid_bdev1", 00:31:03.132 "aliases": [ 00:31:03.132 "b680e075-6807-47b1-b60f-0687b129e235" 00:31:03.132 ], 00:31:03.132 "product_name": "Raid Volume", 00:31:03.132 "block_size": 512, 00:31:03.132 "num_blocks": 126976, 00:31:03.132 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:03.132 "assigned_rate_limits": { 00:31:03.132 "rw_ios_per_sec": 0, 00:31:03.132 "rw_mbytes_per_sec": 0, 00:31:03.132 "r_mbytes_per_sec": 0, 00:31:03.132 "w_mbytes_per_sec": 0 00:31:03.132 }, 00:31:03.132 "claimed": false, 00:31:03.132 "zoned": false, 00:31:03.132 "supported_io_types": { 00:31:03.132 "read": true, 00:31:03.132 "write": true, 00:31:03.132 "unmap": false, 00:31:03.132 "flush": false, 00:31:03.132 "reset": true, 00:31:03.132 "nvme_admin": false, 00:31:03.132 "nvme_io": false, 00:31:03.132 "nvme_io_md": false, 00:31:03.132 "write_zeroes": true, 00:31:03.132 "zcopy": false, 00:31:03.132 
"get_zone_info": false, 00:31:03.132 "zone_management": false, 00:31:03.132 "zone_append": false, 00:31:03.132 "compare": false, 00:31:03.132 "compare_and_write": false, 00:31:03.132 "abort": false, 00:31:03.132 "seek_hole": false, 00:31:03.132 "seek_data": false, 00:31:03.132 "copy": false, 00:31:03.132 "nvme_iov_md": false 00:31:03.132 }, 00:31:03.132 "driver_specific": { 00:31:03.132 "raid": { 00:31:03.132 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:03.132 "strip_size_kb": 64, 00:31:03.132 "state": "online", 00:31:03.132 "raid_level": "raid5f", 00:31:03.132 "superblock": true, 00:31:03.132 "num_base_bdevs": 3, 00:31:03.132 "num_base_bdevs_discovered": 3, 00:31:03.132 "num_base_bdevs_operational": 3, 00:31:03.132 "base_bdevs_list": [ 00:31:03.132 { 00:31:03.132 "name": "pt1", 00:31:03.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:03.132 "is_configured": true, 00:31:03.132 "data_offset": 2048, 00:31:03.132 "data_size": 63488 00:31:03.132 }, 00:31:03.132 { 00:31:03.132 "name": "pt2", 00:31:03.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:03.132 "is_configured": true, 00:31:03.132 "data_offset": 2048, 00:31:03.132 "data_size": 63488 00:31:03.132 }, 00:31:03.132 { 00:31:03.132 "name": "pt3", 00:31:03.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:03.132 "is_configured": true, 00:31:03.132 "data_offset": 2048, 00:31:03.132 "data_size": 63488 00:31:03.132 } 00:31:03.132 ] 00:31:03.132 } 00:31:03.132 } 00:31:03.132 }' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:03.132 pt2 00:31:03.132 pt3' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:03.132 16:00:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:03.132 [2024-11-05 16:00:35.506751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b680e075-6807-47b1-b60f-0687b129e235 '!=' b680e075-6807-47b1-b60f-0687b129e235 ']' 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.132 [2024-11-05 16:00:35.542615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:03.132 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:03.133 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:03.133 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:03.133 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:03.133 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:03.133 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:03.133 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:03.133 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.391 "name": "raid_bdev1", 00:31:03.391 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:03.391 "strip_size_kb": 64, 00:31:03.391 "state": "online", 00:31:03.391 "raid_level": "raid5f", 00:31:03.391 "superblock": true, 00:31:03.391 "num_base_bdevs": 3, 00:31:03.391 "num_base_bdevs_discovered": 2, 00:31:03.391 "num_base_bdevs_operational": 2, 00:31:03.391 "base_bdevs_list": [ 00:31:03.391 { 00:31:03.391 "name": null, 00:31:03.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.391 "is_configured": false, 00:31:03.391 "data_offset": 0, 00:31:03.391 "data_size": 63488 00:31:03.391 }, 00:31:03.391 { 00:31:03.391 "name": "pt2", 00:31:03.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:03.391 "is_configured": true, 00:31:03.391 "data_offset": 2048, 00:31:03.391 "data_size": 63488 00:31:03.391 }, 00:31:03.391 { 00:31:03.391 "name": "pt3", 00:31:03.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:03.391 "is_configured": true, 00:31:03.391 "data_offset": 2048, 00:31:03.391 "data_size": 63488 00:31:03.391 } 00:31:03.391 ] 00:31:03.391 }' 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.391 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.649 [2024-11-05 16:00:35.854636] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:03.649 [2024-11-05 16:00:35.854660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:03.649 [2024-11-05 16:00:35.854714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:03.649 [2024-11-05 16:00:35.854758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:03.649 [2024-11-05 16:00:35.854769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.649 [2024-11-05 16:00:35.910626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:03.649 [2024-11-05 16:00:35.910670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:03.649 [2024-11-05 16:00:35.910681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:03.649 [2024-11-05 16:00:35.910690] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:31:03.649 [2024-11-05 16:00:35.912452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:03.649 [2024-11-05 16:00:35.912485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:03.649 [2024-11-05 16:00:35.912542] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:03.649 [2024-11-05 16:00:35.912575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:03.649 pt2 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.649 "name": "raid_bdev1", 00:31:03.649 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:03.649 "strip_size_kb": 64, 00:31:03.649 "state": "configuring", 00:31:03.649 "raid_level": "raid5f", 00:31:03.649 "superblock": true, 00:31:03.649 "num_base_bdevs": 3, 00:31:03.649 "num_base_bdevs_discovered": 1, 00:31:03.649 "num_base_bdevs_operational": 2, 00:31:03.649 "base_bdevs_list": [ 00:31:03.649 { 00:31:03.649 "name": null, 00:31:03.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.649 "is_configured": false, 00:31:03.649 "data_offset": 2048, 00:31:03.649 "data_size": 63488 00:31:03.649 }, 00:31:03.649 { 00:31:03.649 "name": "pt2", 00:31:03.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:03.649 "is_configured": true, 00:31:03.649 "data_offset": 2048, 00:31:03.649 "data_size": 63488 00:31:03.649 }, 00:31:03.649 { 00:31:03.649 "name": null, 00:31:03.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:03.649 "is_configured": false, 00:31:03.649 "data_offset": 2048, 00:31:03.649 "data_size": 63488 00:31:03.649 } 00:31:03.649 ] 00:31:03.649 }' 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.649 16:00:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.907 [2024-11-05 16:00:36.214710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:03.907 [2024-11-05 16:00:36.214758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:03.907 [2024-11-05 16:00:36.214776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:03.907 [2024-11-05 16:00:36.214784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:03.907 [2024-11-05 16:00:36.215140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:03.907 [2024-11-05 16:00:36.215163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:03.907 [2024-11-05 16:00:36.215220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:03.907 [2024-11-05 16:00:36.215242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:03.907 [2024-11-05 16:00:36.215322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:03.907 [2024-11-05 16:00:36.215336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:03.907 [2024-11-05 16:00:36.215514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:03.907 [2024-11-05 16:00:36.218283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:03.907 [2024-11-05 16:00:36.218299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:31:03.907 [2024-11-05 16:00:36.218461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:03.907 pt3 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.907 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.907 16:00:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.907 "name": "raid_bdev1", 00:31:03.907 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:03.907 "strip_size_kb": 64, 00:31:03.907 "state": "online", 00:31:03.907 "raid_level": "raid5f", 00:31:03.907 "superblock": true, 00:31:03.907 "num_base_bdevs": 3, 00:31:03.907 "num_base_bdevs_discovered": 2, 00:31:03.907 "num_base_bdevs_operational": 2, 00:31:03.907 "base_bdevs_list": [ 00:31:03.907 { 00:31:03.907 "name": null, 00:31:03.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.908 "is_configured": false, 00:31:03.908 "data_offset": 2048, 00:31:03.908 "data_size": 63488 00:31:03.908 }, 00:31:03.908 { 00:31:03.908 "name": "pt2", 00:31:03.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:03.908 "is_configured": true, 00:31:03.908 "data_offset": 2048, 00:31:03.908 "data_size": 63488 00:31:03.908 }, 00:31:03.908 { 00:31:03.908 "name": "pt3", 00:31:03.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:03.908 "is_configured": true, 00:31:03.908 "data_offset": 2048, 00:31:03.908 "data_size": 63488 00:31:03.908 } 00:31:03.908 ] 00:31:03.908 }' 00:31:03.908 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.908 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.166 [2024-11-05 16:00:36.513585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:04.166 [2024-11-05 16:00:36.513610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:04.166 [2024-11-05 16:00:36.513663] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:04.166 [2024-11-05 16:00:36.513710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:04.166 [2024-11-05 16:00:36.513717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.166 [2024-11-05 16:00:36.565605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:04.166 [2024-11-05 16:00:36.565648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:04.166 [2024-11-05 16:00:36.565661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:04.166 [2024-11-05 16:00:36.565667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:04.166 [2024-11-05 16:00:36.567436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:04.166 [2024-11-05 16:00:36.567467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:04.166 [2024-11-05 16:00:36.567522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:04.166 [2024-11-05 16:00:36.567552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:04.166 [2024-11-05 16:00:36.567645] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:04.166 [2024-11-05 16:00:36.567659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:04.166 [2024-11-05 16:00:36.567672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:31:04.166 [2024-11-05 16:00:36.567710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:04.166 pt1 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:31:04.166 16:00:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:04.166 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.167 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.424 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.424 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:04.424 "name": "raid_bdev1", 00:31:04.424 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:04.424 "strip_size_kb": 64, 00:31:04.425 "state": "configuring", 00:31:04.425 "raid_level": "raid5f", 00:31:04.425 
"superblock": true, 00:31:04.425 "num_base_bdevs": 3, 00:31:04.425 "num_base_bdevs_discovered": 1, 00:31:04.425 "num_base_bdevs_operational": 2, 00:31:04.425 "base_bdevs_list": [ 00:31:04.425 { 00:31:04.425 "name": null, 00:31:04.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.425 "is_configured": false, 00:31:04.425 "data_offset": 2048, 00:31:04.425 "data_size": 63488 00:31:04.425 }, 00:31:04.425 { 00:31:04.425 "name": "pt2", 00:31:04.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:04.425 "is_configured": true, 00:31:04.425 "data_offset": 2048, 00:31:04.425 "data_size": 63488 00:31:04.425 }, 00:31:04.425 { 00:31:04.425 "name": null, 00:31:04.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:04.425 "is_configured": false, 00:31:04.425 "data_offset": 2048, 00:31:04.425 "data_size": 63488 00:31:04.425 } 00:31:04.425 ] 00:31:04.425 }' 00:31:04.425 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:04.425 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.683 [2024-11-05 16:00:36.921684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:04.683 [2024-11-05 16:00:36.921733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:04.683 [2024-11-05 16:00:36.921749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:04.683 [2024-11-05 16:00:36.921756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:04.683 [2024-11-05 16:00:36.922121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:04.683 [2024-11-05 16:00:36.922138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:04.683 [2024-11-05 16:00:36.922199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:04.683 [2024-11-05 16:00:36.922215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:04.683 [2024-11-05 16:00:36.922300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:04.683 [2024-11-05 16:00:36.922313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:04.683 [2024-11-05 16:00:36.922508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:04.683 [2024-11-05 16:00:36.925363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:31:04.683 [2024-11-05 16:00:36.925383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:31:04.683 [2024-11-05 16:00:36.925532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:04.683 pt3 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:04.683 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.684 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.684 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.684 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.684 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.684 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:04.684 "name": "raid_bdev1", 00:31:04.684 "uuid": "b680e075-6807-47b1-b60f-0687b129e235", 00:31:04.684 "strip_size_kb": 64, 00:31:04.684 "state": "online", 00:31:04.684 "raid_level": 
"raid5f", 00:31:04.684 "superblock": true, 00:31:04.684 "num_base_bdevs": 3, 00:31:04.684 "num_base_bdevs_discovered": 2, 00:31:04.684 "num_base_bdevs_operational": 2, 00:31:04.684 "base_bdevs_list": [ 00:31:04.684 { 00:31:04.684 "name": null, 00:31:04.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.684 "is_configured": false, 00:31:04.684 "data_offset": 2048, 00:31:04.684 "data_size": 63488 00:31:04.684 }, 00:31:04.684 { 00:31:04.684 "name": "pt2", 00:31:04.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:04.684 "is_configured": true, 00:31:04.684 "data_offset": 2048, 00:31:04.684 "data_size": 63488 00:31:04.684 }, 00:31:04.684 { 00:31:04.684 "name": "pt3", 00:31:04.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:04.684 "is_configured": true, 00:31:04.684 "data_offset": 2048, 00:31:04.684 "data_size": 63488 00:31:04.684 } 00:31:04.684 ] 00:31:04.684 }' 00:31:04.684 16:00:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:04.684 16:00:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:31:04.943 [2024-11-05 16:00:37.268950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b680e075-6807-47b1-b60f-0687b129e235 '!=' b680e075-6807-47b1-b60f-0687b129e235 ']' 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78588 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 78588 ']' 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 78588 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78588 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:04.943 killing process with pid 78588 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78588' 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 78588 00:31:04.943 [2024-11-05 16:00:37.313875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:04.943 [2024-11-05 16:00:37.313940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:31:04.943 16:00:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 78588 00:31:04.943 [2024-11-05 16:00:37.313985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:04.943 [2024-11-05 16:00:37.313994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:31:05.201 [2024-11-05 16:00:37.457023] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:05.767 16:00:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:05.767 00:31:05.767 real 0m5.394s 00:31:05.767 user 0m8.572s 00:31:05.767 sys 0m0.868s 00:31:05.767 16:00:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:05.767 ************************************ 00:31:05.767 END TEST raid5f_superblock_test 00:31:05.767 ************************************ 00:31:05.767 16:00:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.767 16:00:38 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:31:05.767 16:00:38 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:31:05.767 16:00:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:31:05.767 16:00:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:05.767 16:00:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:05.767 ************************************ 00:31:05.767 START TEST raid5f_rebuild_test 00:31:05.767 ************************************ 00:31:05.767 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79004 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79004 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 79004 ']' 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:05.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.768 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:05.768 [2024-11-05 16:00:38.122340] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:31:05.768 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:05.768 Zero copy mechanism will not be used. 00:31:05.768 [2024-11-05 16:00:38.122456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79004 ] 00:31:06.026 [2024-11-05 16:00:38.276963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.026 [2024-11-05 16:00:38.354692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.284 [2024-11-05 16:00:38.461084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:06.284 [2024-11-05 16:00:38.461129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 BaseBdev1_malloc 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 [2024-11-05 16:00:38.988309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:06.851 [2024-11-05 16:00:38.988364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.851 [2024-11-05 16:00:38.988382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:06.851 [2024-11-05 16:00:38.988391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.851 [2024-11-05 16:00:38.990071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.851 [2024-11-05 16:00:38.990103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:06.851 BaseBdev1 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 BaseBdev2_malloc 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 [2024-11-05 16:00:39.019018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:06.851 [2024-11-05 16:00:39.019061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.851 [2024-11-05 16:00:39.019073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:06.851 [2024-11-05 16:00:39.019083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.851 [2024-11-05 16:00:39.020673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.851 [2024-11-05 16:00:39.020704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:06.851 BaseBdev2 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 BaseBdev3_malloc 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 [2024-11-05 16:00:39.063246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:06.851 [2024-11-05 16:00:39.063288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.851 [2024-11-05 16:00:39.063304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:06.851 [2024-11-05 16:00:39.063313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.851 [2024-11-05 16:00:39.064950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.851 [2024-11-05 16:00:39.064980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:06.851 BaseBdev3 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 spare_malloc 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 spare_delay 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 [2024-11-05 16:00:39.101811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:06.851 [2024-11-05 16:00:39.101857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.851 [2024-11-05 16:00:39.101869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:06.851 [2024-11-05 16:00:39.101877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.851 [2024-11-05 16:00:39.103539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.851 [2024-11-05 16:00:39.103570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:06.851 spare 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.851 [2024-11-05 16:00:39.109880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:06.851 [2024-11-05 16:00:39.111304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:06.851 [2024-11-05 16:00:39.111358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:06.851 [2024-11-05 16:00:39.111421] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:06.851 [2024-11-05 16:00:39.111434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:06.851 [2024-11-05 16:00:39.111643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:06.851 [2024-11-05 16:00:39.114644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:06.851 [2024-11-05 16:00:39.114662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:06.851 [2024-11-05 16:00:39.114801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.851 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.852 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.852 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.852 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.852 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:06.852 "name": "raid_bdev1", 00:31:06.852 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:06.852 "strip_size_kb": 64, 00:31:06.852 "state": "online", 00:31:06.852 "raid_level": "raid5f", 00:31:06.852 "superblock": false, 00:31:06.852 "num_base_bdevs": 3, 00:31:06.852 "num_base_bdevs_discovered": 3, 00:31:06.852 "num_base_bdevs_operational": 3, 00:31:06.852 "base_bdevs_list": [ 00:31:06.852 { 00:31:06.852 "name": "BaseBdev1", 00:31:06.852 "uuid": "e03cf086-bff0-537c-9be9-ec36a6ca5a6a", 00:31:06.852 "is_configured": true, 00:31:06.852 "data_offset": 0, 00:31:06.852 "data_size": 65536 00:31:06.852 }, 00:31:06.852 { 00:31:06.852 "name": "BaseBdev2", 00:31:06.852 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:06.852 "is_configured": true, 00:31:06.852 "data_offset": 0, 00:31:06.852 "data_size": 65536 00:31:06.852 }, 00:31:06.852 { 00:31:06.852 "name": "BaseBdev3", 00:31:06.852 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:06.852 "is_configured": true, 00:31:06.852 "data_offset": 0, 00:31:06.852 "data_size": 65536 00:31:06.852 } 00:31:06.852 ] 00:31:06.852 }' 00:31:06.852 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:06.852 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:07.110 [2024-11-05 16:00:39.435033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.110 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:07.379 [2024-11-05 16:00:39.682947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:07.379 /dev/nbd0 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:07.379 1+0 records in 00:31:07.379 1+0 records out 00:31:07.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284353 s, 14.4 MB/s 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:31:07.379 16:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:31:07.639 512+0 records in 00:31:07.639 512+0 records out 00:31:07.639 67108864 bytes (67 MB, 64 MiB) copied, 0.288365 s, 233 MB/s 00:31:07.639 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:07.639 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:07.639 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:07.639 16:00:40 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:31:07.639 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:07.639 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:07.639 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:07.897 [2024-11-05 16:00:40.222439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.897 [2024-11-05 16:00:40.250490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:07.897 
16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:07.897 "name": "raid_bdev1", 00:31:07.897 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:07.897 "strip_size_kb": 64, 00:31:07.897 "state": "online", 00:31:07.897 "raid_level": "raid5f", 00:31:07.897 "superblock": false, 00:31:07.897 "num_base_bdevs": 3, 00:31:07.897 "num_base_bdevs_discovered": 2, 00:31:07.897 "num_base_bdevs_operational": 2, 00:31:07.897 "base_bdevs_list": [ 00:31:07.897 { 
00:31:07.897 "name": null, 00:31:07.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.897 "is_configured": false, 00:31:07.897 "data_offset": 0, 00:31:07.897 "data_size": 65536 00:31:07.897 }, 00:31:07.897 { 00:31:07.897 "name": "BaseBdev2", 00:31:07.897 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:07.897 "is_configured": true, 00:31:07.897 "data_offset": 0, 00:31:07.897 "data_size": 65536 00:31:07.897 }, 00:31:07.897 { 00:31:07.897 "name": "BaseBdev3", 00:31:07.897 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:07.897 "is_configured": true, 00:31:07.897 "data_offset": 0, 00:31:07.897 "data_size": 65536 00:31:07.897 } 00:31:07.897 ] 00:31:07.897 }' 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:07.897 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.155 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:08.155 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.155 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.155 [2024-11-05 16:00:40.566573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:08.413 [2024-11-05 16:00:40.575061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:31:08.413 16:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.413 16:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:08.413 [2024-11-05 16:00:40.579269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.346 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:09.346 "name": "raid_bdev1", 00:31:09.346 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:09.346 "strip_size_kb": 64, 00:31:09.346 "state": "online", 00:31:09.346 "raid_level": "raid5f", 00:31:09.346 "superblock": false, 00:31:09.346 "num_base_bdevs": 3, 00:31:09.346 "num_base_bdevs_discovered": 3, 00:31:09.346 "num_base_bdevs_operational": 3, 00:31:09.346 "process": { 00:31:09.347 "type": "rebuild", 00:31:09.347 "target": "spare", 00:31:09.347 "progress": { 00:31:09.347 "blocks": 20480, 00:31:09.347 "percent": 15 00:31:09.347 } 00:31:09.347 }, 00:31:09.347 "base_bdevs_list": [ 00:31:09.347 { 00:31:09.347 "name": "spare", 00:31:09.347 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:09.347 "is_configured": true, 00:31:09.347 "data_offset": 0, 00:31:09.347 "data_size": 65536 00:31:09.347 }, 00:31:09.347 { 00:31:09.347 "name": "BaseBdev2", 00:31:09.347 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:09.347 "is_configured": true, 00:31:09.347 "data_offset": 0, 00:31:09.347 
"data_size": 65536 00:31:09.347 }, 00:31:09.347 { 00:31:09.347 "name": "BaseBdev3", 00:31:09.347 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:09.347 "is_configured": true, 00:31:09.347 "data_offset": 0, 00:31:09.347 "data_size": 65536 00:31:09.347 } 00:31:09.347 ] 00:31:09.347 }' 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.347 [2024-11-05 16:00:41.667910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:09.347 [2024-11-05 16:00:41.687317] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:09.347 [2024-11-05 16:00:41.687364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:09.347 [2024-11-05 16:00:41.687378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:09.347 [2024-11-05 16:00:41.687384] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:09.347 "name": "raid_bdev1", 00:31:09.347 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:09.347 "strip_size_kb": 64, 00:31:09.347 "state": "online", 00:31:09.347 "raid_level": "raid5f", 00:31:09.347 "superblock": false, 00:31:09.347 "num_base_bdevs": 3, 00:31:09.347 "num_base_bdevs_discovered": 2, 00:31:09.347 "num_base_bdevs_operational": 2, 00:31:09.347 "base_bdevs_list": [ 00:31:09.347 { 00:31:09.347 "name": null, 00:31:09.347 
"uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.347 "is_configured": false, 00:31:09.347 "data_offset": 0, 00:31:09.347 "data_size": 65536 00:31:09.347 }, 00:31:09.347 { 00:31:09.347 "name": "BaseBdev2", 00:31:09.347 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:09.347 "is_configured": true, 00:31:09.347 "data_offset": 0, 00:31:09.347 "data_size": 65536 00:31:09.347 }, 00:31:09.347 { 00:31:09.347 "name": "BaseBdev3", 00:31:09.347 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:09.347 "is_configured": true, 00:31:09.347 "data_offset": 0, 00:31:09.347 "data_size": 65536 00:31:09.347 } 00:31:09.347 ] 00:31:09.347 }' 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:09.347 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.605 16:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.605 16:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:09.864 "name": "raid_bdev1", 00:31:09.864 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:09.864 "strip_size_kb": 64, 00:31:09.864 "state": "online", 00:31:09.864 "raid_level": "raid5f", 00:31:09.864 "superblock": false, 00:31:09.864 "num_base_bdevs": 3, 00:31:09.864 "num_base_bdevs_discovered": 2, 00:31:09.864 "num_base_bdevs_operational": 2, 00:31:09.864 "base_bdevs_list": [ 00:31:09.864 { 00:31:09.864 "name": null, 00:31:09.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.864 "is_configured": false, 00:31:09.864 "data_offset": 0, 00:31:09.864 "data_size": 65536 00:31:09.864 }, 00:31:09.864 { 00:31:09.864 "name": "BaseBdev2", 00:31:09.864 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:09.864 "is_configured": true, 00:31:09.864 "data_offset": 0, 00:31:09.864 "data_size": 65536 00:31:09.864 }, 00:31:09.864 { 00:31:09.864 "name": "BaseBdev3", 00:31:09.864 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:09.864 "is_configured": true, 00:31:09.864 "data_offset": 0, 00:31:09.864 "data_size": 65536 00:31:09.864 } 00:31:09.864 ] 00:31:09.864 }' 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.864 [2024-11-05 16:00:42.096888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:31:09.864 [2024-11-05 16:00:42.104699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.864 16:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:09.864 [2024-11-05 16:00:42.108810] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:10.798 "name": "raid_bdev1", 00:31:10.798 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:10.798 "strip_size_kb": 64, 00:31:10.798 "state": "online", 00:31:10.798 "raid_level": "raid5f", 00:31:10.798 "superblock": false, 00:31:10.798 "num_base_bdevs": 3, 00:31:10.798 
"num_base_bdevs_discovered": 3, 00:31:10.798 "num_base_bdevs_operational": 3, 00:31:10.798 "process": { 00:31:10.798 "type": "rebuild", 00:31:10.798 "target": "spare", 00:31:10.798 "progress": { 00:31:10.798 "blocks": 20480, 00:31:10.798 "percent": 15 00:31:10.798 } 00:31:10.798 }, 00:31:10.798 "base_bdevs_list": [ 00:31:10.798 { 00:31:10.798 "name": "spare", 00:31:10.798 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:10.798 "is_configured": true, 00:31:10.798 "data_offset": 0, 00:31:10.798 "data_size": 65536 00:31:10.798 }, 00:31:10.798 { 00:31:10.798 "name": "BaseBdev2", 00:31:10.798 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:10.798 "is_configured": true, 00:31:10.798 "data_offset": 0, 00:31:10.798 "data_size": 65536 00:31:10.798 }, 00:31:10.798 { 00:31:10.798 "name": "BaseBdev3", 00:31:10.798 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:10.798 "is_configured": true, 00:31:10.798 "data_offset": 0, 00:31:10.798 "data_size": 65536 00:31:10.798 } 00:31:10.798 ] 00:31:10.798 }' 00:31:10.798 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.799 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.057 16:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.057 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:11.057 "name": "raid_bdev1", 00:31:11.057 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:11.057 "strip_size_kb": 64, 00:31:11.057 "state": "online", 00:31:11.057 "raid_level": "raid5f", 00:31:11.057 "superblock": false, 00:31:11.057 "num_base_bdevs": 3, 00:31:11.057 "num_base_bdevs_discovered": 3, 00:31:11.057 "num_base_bdevs_operational": 3, 00:31:11.057 "process": { 00:31:11.057 "type": "rebuild", 00:31:11.057 "target": "spare", 00:31:11.057 "progress": { 00:31:11.057 "blocks": 20480, 00:31:11.057 "percent": 15 00:31:11.057 } 00:31:11.057 }, 00:31:11.057 "base_bdevs_list": [ 00:31:11.057 { 00:31:11.057 "name": "spare", 00:31:11.057 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:11.057 "is_configured": true, 00:31:11.057 "data_offset": 0, 00:31:11.057 
"data_size": 65536 00:31:11.057 }, 00:31:11.057 { 00:31:11.057 "name": "BaseBdev2", 00:31:11.057 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:11.057 "is_configured": true, 00:31:11.057 "data_offset": 0, 00:31:11.057 "data_size": 65536 00:31:11.057 }, 00:31:11.057 { 00:31:11.057 "name": "BaseBdev3", 00:31:11.057 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:11.057 "is_configured": true, 00:31:11.057 "data_offset": 0, 00:31:11.057 "data_size": 65536 00:31:11.057 } 00:31:11.057 ] 00:31:11.057 }' 00:31:11.057 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:11.057 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:11.057 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:11.057 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:11.057 16:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.992 16:00:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:11.992 "name": "raid_bdev1", 00:31:11.992 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:11.992 "strip_size_kb": 64, 00:31:11.992 "state": "online", 00:31:11.992 "raid_level": "raid5f", 00:31:11.992 "superblock": false, 00:31:11.992 "num_base_bdevs": 3, 00:31:11.992 "num_base_bdevs_discovered": 3, 00:31:11.992 "num_base_bdevs_operational": 3, 00:31:11.992 "process": { 00:31:11.992 "type": "rebuild", 00:31:11.992 "target": "spare", 00:31:11.992 "progress": { 00:31:11.992 "blocks": 43008, 00:31:11.992 "percent": 32 00:31:11.992 } 00:31:11.992 }, 00:31:11.992 "base_bdevs_list": [ 00:31:11.992 { 00:31:11.992 "name": "spare", 00:31:11.992 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:11.992 "is_configured": true, 00:31:11.992 "data_offset": 0, 00:31:11.992 "data_size": 65536 00:31:11.992 }, 00:31:11.992 { 00:31:11.992 "name": "BaseBdev2", 00:31:11.992 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:11.992 "is_configured": true, 00:31:11.992 "data_offset": 0, 00:31:11.992 "data_size": 65536 00:31:11.992 }, 00:31:11.992 { 00:31:11.992 "name": "BaseBdev3", 00:31:11.992 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:11.992 "is_configured": true, 00:31:11.992 "data_offset": 0, 00:31:11.992 "data_size": 65536 00:31:11.992 } 00:31:11.992 ] 00:31:11.992 }' 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:11.992 16:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:13.366 "name": "raid_bdev1", 00:31:13.366 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:13.366 "strip_size_kb": 64, 00:31:13.366 "state": "online", 00:31:13.366 "raid_level": "raid5f", 00:31:13.366 "superblock": false, 00:31:13.366 "num_base_bdevs": 3, 00:31:13.366 "num_base_bdevs_discovered": 3, 00:31:13.366 "num_base_bdevs_operational": 3, 00:31:13.366 "process": { 00:31:13.366 "type": "rebuild", 00:31:13.366 "target": "spare", 00:31:13.366 
"progress": { 00:31:13.366 "blocks": 65536, 00:31:13.366 "percent": 50 00:31:13.366 } 00:31:13.366 }, 00:31:13.366 "base_bdevs_list": [ 00:31:13.366 { 00:31:13.366 "name": "spare", 00:31:13.366 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:13.366 "is_configured": true, 00:31:13.366 "data_offset": 0, 00:31:13.366 "data_size": 65536 00:31:13.366 }, 00:31:13.366 { 00:31:13.366 "name": "BaseBdev2", 00:31:13.366 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:13.366 "is_configured": true, 00:31:13.366 "data_offset": 0, 00:31:13.366 "data_size": 65536 00:31:13.366 }, 00:31:13.366 { 00:31:13.366 "name": "BaseBdev3", 00:31:13.366 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:13.366 "is_configured": true, 00:31:13.366 "data_offset": 0, 00:31:13.366 "data_size": 65536 00:31:13.366 } 00:31:13.366 ] 00:31:13.366 }' 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:13.366 16:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.300 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:14.300 "name": "raid_bdev1", 00:31:14.300 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:14.300 "strip_size_kb": 64, 00:31:14.300 "state": "online", 00:31:14.300 "raid_level": "raid5f", 00:31:14.300 "superblock": false, 00:31:14.300 "num_base_bdevs": 3, 00:31:14.300 "num_base_bdevs_discovered": 3, 00:31:14.300 "num_base_bdevs_operational": 3, 00:31:14.300 "process": { 00:31:14.300 "type": "rebuild", 00:31:14.300 "target": "spare", 00:31:14.300 "progress": { 00:31:14.300 "blocks": 88064, 00:31:14.300 "percent": 67 00:31:14.300 } 00:31:14.300 }, 00:31:14.300 "base_bdevs_list": [ 00:31:14.300 { 00:31:14.300 "name": "spare", 00:31:14.301 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:14.301 "is_configured": true, 00:31:14.301 "data_offset": 0, 00:31:14.301 "data_size": 65536 00:31:14.301 }, 00:31:14.301 { 00:31:14.301 "name": "BaseBdev2", 00:31:14.301 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:14.301 "is_configured": true, 00:31:14.301 "data_offset": 0, 00:31:14.301 "data_size": 65536 00:31:14.301 }, 00:31:14.301 { 00:31:14.301 "name": "BaseBdev3", 00:31:14.301 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:14.301 "is_configured": true, 00:31:14.301 "data_offset": 0, 00:31:14.301 "data_size": 65536 00:31:14.301 } 00:31:14.301 ] 00:31:14.301 }' 
00:31:14.301 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:14.301 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:14.301 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:14.301 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:14.301 16:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.246 16:00:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.504 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:15.504 "name": "raid_bdev1", 00:31:15.504 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:15.504 "strip_size_kb": 64, 00:31:15.504 
"state": "online", 00:31:15.504 "raid_level": "raid5f", 00:31:15.504 "superblock": false, 00:31:15.504 "num_base_bdevs": 3, 00:31:15.504 "num_base_bdevs_discovered": 3, 00:31:15.504 "num_base_bdevs_operational": 3, 00:31:15.504 "process": { 00:31:15.504 "type": "rebuild", 00:31:15.504 "target": "spare", 00:31:15.504 "progress": { 00:31:15.504 "blocks": 110592, 00:31:15.504 "percent": 84 00:31:15.504 } 00:31:15.504 }, 00:31:15.504 "base_bdevs_list": [ 00:31:15.504 { 00:31:15.504 "name": "spare", 00:31:15.504 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:15.504 "is_configured": true, 00:31:15.504 "data_offset": 0, 00:31:15.504 "data_size": 65536 00:31:15.504 }, 00:31:15.504 { 00:31:15.504 "name": "BaseBdev2", 00:31:15.504 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:15.504 "is_configured": true, 00:31:15.504 "data_offset": 0, 00:31:15.504 "data_size": 65536 00:31:15.504 }, 00:31:15.504 { 00:31:15.504 "name": "BaseBdev3", 00:31:15.504 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:15.504 "is_configured": true, 00:31:15.504 "data_offset": 0, 00:31:15.504 "data_size": 65536 00:31:15.504 } 00:31:15.504 ] 00:31:15.504 }' 00:31:15.504 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:15.504 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:15.504 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:15.504 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:15.504 16:00:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:16.437 [2024-11-05 16:00:48.554479] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:16.437 [2024-11-05 16:00:48.554549] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:16.437 [2024-11-05 
16:00:48.554593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:16.437 "name": "raid_bdev1", 00:31:16.437 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:16.437 "strip_size_kb": 64, 00:31:16.437 "state": "online", 00:31:16.437 "raid_level": "raid5f", 00:31:16.437 "superblock": false, 00:31:16.437 "num_base_bdevs": 3, 00:31:16.437 "num_base_bdevs_discovered": 3, 00:31:16.437 "num_base_bdevs_operational": 3, 00:31:16.437 "base_bdevs_list": [ 00:31:16.437 { 00:31:16.437 "name": "spare", 00:31:16.437 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:16.437 "is_configured": true, 00:31:16.437 "data_offset": 0, 00:31:16.437 "data_size": 65536 
00:31:16.437 }, 00:31:16.437 { 00:31:16.437 "name": "BaseBdev2", 00:31:16.437 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:16.437 "is_configured": true, 00:31:16.437 "data_offset": 0, 00:31:16.437 "data_size": 65536 00:31:16.437 }, 00:31:16.437 { 00:31:16.437 "name": "BaseBdev3", 00:31:16.437 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:16.437 "is_configured": true, 00:31:16.437 "data_offset": 0, 00:31:16.437 "data_size": 65536 00:31:16.437 } 00:31:16.437 ] 00:31:16.437 }' 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:31:16.437 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:16.438 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:16.698 "name": "raid_bdev1", 00:31:16.698 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:16.698 "strip_size_kb": 64, 00:31:16.698 "state": "online", 00:31:16.698 "raid_level": "raid5f", 00:31:16.698 "superblock": false, 00:31:16.698 "num_base_bdevs": 3, 00:31:16.698 "num_base_bdevs_discovered": 3, 00:31:16.698 "num_base_bdevs_operational": 3, 00:31:16.698 "base_bdevs_list": [ 00:31:16.698 { 00:31:16.698 "name": "spare", 00:31:16.698 "uuid": "890365b2-f849-5e40-acb6-cce975d94318", 00:31:16.698 "is_configured": true, 00:31:16.698 "data_offset": 0, 00:31:16.698 "data_size": 65536 00:31:16.698 }, 00:31:16.698 { 00:31:16.698 "name": "BaseBdev2", 00:31:16.698 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:16.698 "is_configured": true, 00:31:16.698 "data_offset": 0, 00:31:16.698 "data_size": 65536 00:31:16.698 }, 00:31:16.698 { 00:31:16.698 "name": "BaseBdev3", 00:31:16.698 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:16.698 "is_configured": true, 00:31:16.698 "data_offset": 0, 00:31:16.698 "data_size": 65536 00:31:16.698 } 00:31:16.698 ] 00:31:16.698 }' 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:16.698 "name": "raid_bdev1", 00:31:16.698 "uuid": "fc0334d9-bd4b-48fb-b981-26c91019afff", 00:31:16.698 "strip_size_kb": 64, 00:31:16.698 "state": "online", 00:31:16.698 "raid_level": "raid5f", 00:31:16.698 "superblock": false, 00:31:16.698 "num_base_bdevs": 3, 00:31:16.698 "num_base_bdevs_discovered": 3, 00:31:16.698 "num_base_bdevs_operational": 3, 00:31:16.698 "base_bdevs_list": [ 00:31:16.698 { 00:31:16.698 "name": "spare", 00:31:16.698 "uuid": 
"890365b2-f849-5e40-acb6-cce975d94318", 00:31:16.698 "is_configured": true, 00:31:16.698 "data_offset": 0, 00:31:16.698 "data_size": 65536 00:31:16.698 }, 00:31:16.698 { 00:31:16.698 "name": "BaseBdev2", 00:31:16.698 "uuid": "bf6f2846-0eae-508f-89dc-ff1bdec1f5bd", 00:31:16.698 "is_configured": true, 00:31:16.698 "data_offset": 0, 00:31:16.698 "data_size": 65536 00:31:16.698 }, 00:31:16.698 { 00:31:16.698 "name": "BaseBdev3", 00:31:16.698 "uuid": "ae3ded08-5e42-5a30-9c27-690f171de7cc", 00:31:16.698 "is_configured": true, 00:31:16.698 "data_offset": 0, 00:31:16.698 "data_size": 65536 00:31:16.698 } 00:31:16.698 ] 00:31:16.698 }' 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:16.698 16:00:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.962 [2024-11-05 16:00:49.263911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:16.962 [2024-11-05 16:00:49.263933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:16.962 [2024-11-05 16:00:49.263994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:16.962 [2024-11-05 16:00:49.264058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:16.962 [2024-11-05 16:00:49.264070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:16.962 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:17.220 /dev/nbd0 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:17.220 1+0 records in 00:31:17.220 1+0 records out 00:31:17.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139237 s, 29.4 MB/s 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:17.220 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:17.497 /dev/nbd1 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:17.497 1+0 records in 00:31:17.497 1+0 records out 00:31:17.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266976 s, 15.3 MB/s 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:17.497 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:17.498 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:17.498 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:17.498 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:17.498 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:17.498 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:17.498 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:17.498 16:00:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:17.756 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79004 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 79004 ']' 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 79004 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 79004 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:18.014 killing process with pid 79004 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79004' 00:31:18.014 Received shutdown signal, test time was about 60.000000 seconds 00:31:18.014 00:31:18.014 Latency(us) 00:31:18.014 [2024-11-05T16:00:50.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.014 [2024-11-05T16:00:50.429Z] =================================================================================================================== 00:31:18.014 [2024-11-05T16:00:50.429Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 79004 00:31:18.014 16:00:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 79004 00:31:18.014 [2024-11-05 16:00:50.312004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:18.272 [2024-11-05 16:00:50.500067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:18.839 ************************************ 00:31:18.839 END TEST raid5f_rebuild_test 00:31:18.839 ************************************ 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:31:18.839 00:31:18.839 real 0m12.990s 00:31:18.839 user 0m15.757s 00:31:18.839 sys 0m1.407s 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.839 16:00:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:31:18.839 16:00:51 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:31:18.839 16:00:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:18.839 16:00:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:18.839 ************************************ 00:31:18.839 START TEST raid5f_rebuild_test_sb 00:31:18.839 ************************************ 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:18.839 16:00:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:18.839 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79426 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79426 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 79426 ']' 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:18.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.840 16:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:18.840 [2024-11-05 16:00:51.139816] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:31:18.840 [2024-11-05 16:00:51.139922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79426 ] 00:31:18.840 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:18.840 Zero copy mechanism will not be used. 
00:31:19.098 [2024-11-05 16:00:51.288729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.098 [2024-11-05 16:00:51.368610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.098 [2024-11-05 16:00:51.474362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:19.098 [2024-11-05 16:00:51.474410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:19.664 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:19.664 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:31:19.664 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:19.664 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.665 BaseBdev1_malloc 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.665 [2024-11-05 16:00:52.034077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:19.665 [2024-11-05 16:00:52.034128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.665 [2024-11-05 16:00:52.034145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:19.665 
[2024-11-05 16:00:52.034154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.665 [2024-11-05 16:00:52.035829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.665 [2024-11-05 16:00:52.035869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:19.665 BaseBdev1 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.665 BaseBdev2_malloc 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.665 [2024-11-05 16:00:52.064802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:19.665 [2024-11-05 16:00:52.064850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.665 [2024-11-05 16:00:52.064862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:19.665 [2024-11-05 16:00:52.064870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.665 [2024-11-05 16:00:52.066466] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.665 [2024-11-05 16:00:52.066492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:19.665 BaseBdev2 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.665 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.923 BaseBdev3_malloc 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.923 [2024-11-05 16:00:52.110938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:19.923 [2024-11-05 16:00:52.110977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.923 [2024-11-05 16:00:52.110994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:19.923 [2024-11-05 16:00:52.111003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.923 [2024-11-05 16:00:52.112629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.923 [2024-11-05 16:00:52.112657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:31:19.923 BaseBdev3 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.923 spare_malloc 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.923 spare_delay 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.923 [2024-11-05 16:00:52.149737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:19.923 [2024-11-05 16:00:52.149770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.923 [2024-11-05 16:00:52.149782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:19.923 [2024-11-05 16:00:52.149790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.923 [2024-11-05 16:00:52.151579] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.923 [2024-11-05 16:00:52.151608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:19.923 spare 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.923 [2024-11-05 16:00:52.157797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:19.923 [2024-11-05 16:00:52.159285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:19.923 [2024-11-05 16:00:52.159338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:19.923 [2024-11-05 16:00:52.159469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:19.923 [2024-11-05 16:00:52.159484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:19.923 [2024-11-05 16:00:52.159691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:19.923 [2024-11-05 16:00:52.162690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:19.923 [2024-11-05 16:00:52.162709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:19.923 [2024-11-05 16:00:52.162865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.923 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.924 16:00:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:19.924 "name": "raid_bdev1", 00:31:19.924 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:19.924 "strip_size_kb": 64, 00:31:19.924 "state": "online", 00:31:19.924 "raid_level": "raid5f", 00:31:19.924 "superblock": true, 
00:31:19.924 "num_base_bdevs": 3, 00:31:19.924 "num_base_bdevs_discovered": 3, 00:31:19.924 "num_base_bdevs_operational": 3, 00:31:19.924 "base_bdevs_list": [ 00:31:19.924 { 00:31:19.924 "name": "BaseBdev1", 00:31:19.924 "uuid": "9fff67b4-87fd-5db5-9585-6c1400b1a930", 00:31:19.924 "is_configured": true, 00:31:19.924 "data_offset": 2048, 00:31:19.924 "data_size": 63488 00:31:19.924 }, 00:31:19.924 { 00:31:19.924 "name": "BaseBdev2", 00:31:19.924 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:19.924 "is_configured": true, 00:31:19.924 "data_offset": 2048, 00:31:19.924 "data_size": 63488 00:31:19.924 }, 00:31:19.924 { 00:31:19.924 "name": "BaseBdev3", 00:31:19.924 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:19.924 "is_configured": true, 00:31:19.924 "data_offset": 2048, 00:31:19.924 "data_size": 63488 00:31:19.924 } 00:31:19.924 ] 00:31:19.924 }' 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:19.924 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.182 [2024-11-05 16:00:52.463074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.182 16:00:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:20.182 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:20.183 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:20.183 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:31:20.441 [2024-11-05 16:00:52.718974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:20.441 /dev/nbd0 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:20.441 1+0 records in 00:31:20.441 1+0 records out 00:31:20.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261214 s, 15.7 MB/s 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:31:20.441 16:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:31:20.699 496+0 records in 00:31:20.699 496+0 records out 00:31:20.699 65011712 bytes (65 MB, 62 MiB) copied, 0.314603 s, 207 MB/s 00:31:20.699 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:20.699 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:20.699 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:20.699 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:20.699 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:20.699 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:20.699 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:20.957 [2024-11-05 16:00:53.297923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.957 [2024-11-05 16:00:53.305213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:20.957 16:00:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:20.957 "name": "raid_bdev1", 00:31:20.957 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:20.957 "strip_size_kb": 64, 00:31:20.957 "state": "online", 00:31:20.957 "raid_level": "raid5f", 00:31:20.957 "superblock": true, 00:31:20.957 "num_base_bdevs": 3, 00:31:20.957 "num_base_bdevs_discovered": 2, 00:31:20.957 "num_base_bdevs_operational": 2, 00:31:20.957 "base_bdevs_list": [ 00:31:20.957 { 00:31:20.957 "name": null, 00:31:20.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.957 "is_configured": false, 00:31:20.957 "data_offset": 0, 00:31:20.957 "data_size": 63488 00:31:20.957 }, 00:31:20.957 { 00:31:20.957 "name": "BaseBdev2", 00:31:20.957 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:20.957 "is_configured": true, 00:31:20.957 "data_offset": 2048, 00:31:20.957 "data_size": 63488 00:31:20.957 }, 00:31:20.957 { 00:31:20.957 "name": "BaseBdev3", 00:31:20.957 "uuid": 
"412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:20.957 "is_configured": true, 00:31:20.957 "data_offset": 2048, 00:31:20.957 "data_size": 63488 00:31:20.957 } 00:31:20.957 ] 00:31:20.957 }' 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:20.957 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.216 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:21.216 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.216 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.216 [2024-11-05 16:00:53.609279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:21.216 [2024-11-05 16:00:53.617736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:31:21.216 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.216 16:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:21.216 [2024-11-05 16:00:53.621984] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:22.589 "name": "raid_bdev1", 00:31:22.589 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:22.589 "strip_size_kb": 64, 00:31:22.589 "state": "online", 00:31:22.589 "raid_level": "raid5f", 00:31:22.589 "superblock": true, 00:31:22.589 "num_base_bdevs": 3, 00:31:22.589 "num_base_bdevs_discovered": 3, 00:31:22.589 "num_base_bdevs_operational": 3, 00:31:22.589 "process": { 00:31:22.589 "type": "rebuild", 00:31:22.589 "target": "spare", 00:31:22.589 "progress": { 00:31:22.589 "blocks": 20480, 00:31:22.589 "percent": 16 00:31:22.589 } 00:31:22.589 }, 00:31:22.589 "base_bdevs_list": [ 00:31:22.589 { 00:31:22.589 "name": "spare", 00:31:22.589 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:22.589 "is_configured": true, 00:31:22.589 "data_offset": 2048, 00:31:22.589 "data_size": 63488 00:31:22.589 }, 00:31:22.589 { 00:31:22.589 "name": "BaseBdev2", 00:31:22.589 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:22.589 "is_configured": true, 00:31:22.589 "data_offset": 2048, 00:31:22.589 "data_size": 63488 00:31:22.589 }, 00:31:22.589 { 00:31:22.589 "name": "BaseBdev3", 00:31:22.589 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:22.589 "is_configured": true, 00:31:22.589 "data_offset": 2048, 00:31:22.589 "data_size": 63488 00:31:22.589 } 00:31:22.589 ] 00:31:22.589 }' 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:22.589 16:00:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.589 [2024-11-05 16:00:54.722661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.589 [2024-11-05 16:00:54.730299] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:22.589 [2024-11-05 16:00:54.730343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.589 [2024-11-05 16:00:54.730358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.589 [2024-11-05 16:00:54.730364] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:22.589 16:00:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.589 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:22.589 "name": "raid_bdev1", 00:31:22.589 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:22.589 "strip_size_kb": 64, 00:31:22.589 "state": "online", 00:31:22.589 "raid_level": "raid5f", 00:31:22.590 "superblock": true, 00:31:22.590 "num_base_bdevs": 3, 00:31:22.590 "num_base_bdevs_discovered": 2, 00:31:22.590 "num_base_bdevs_operational": 2, 00:31:22.590 "base_bdevs_list": [ 00:31:22.590 { 00:31:22.590 "name": null, 00:31:22.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.590 "is_configured": false, 00:31:22.590 "data_offset": 0, 00:31:22.590 "data_size": 63488 00:31:22.590 }, 00:31:22.590 { 00:31:22.590 "name": "BaseBdev2", 00:31:22.590 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:22.590 "is_configured": true, 00:31:22.590 "data_offset": 2048, 00:31:22.590 "data_size": 
63488 00:31:22.590 }, 00:31:22.590 { 00:31:22.590 "name": "BaseBdev3", 00:31:22.590 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:22.590 "is_configured": true, 00:31:22.590 "data_offset": 2048, 00:31:22.590 "data_size": 63488 00:31:22.590 } 00:31:22.590 ] 00:31:22.590 }' 00:31:22.590 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:22.590 16:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:22.848 "name": "raid_bdev1", 00:31:22.848 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:22.848 "strip_size_kb": 64, 00:31:22.848 "state": "online", 00:31:22.848 "raid_level": "raid5f", 00:31:22.848 "superblock": true, 00:31:22.848 "num_base_bdevs": 3, 00:31:22.848 
"num_base_bdevs_discovered": 2, 00:31:22.848 "num_base_bdevs_operational": 2, 00:31:22.848 "base_bdevs_list": [ 00:31:22.848 { 00:31:22.848 "name": null, 00:31:22.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.848 "is_configured": false, 00:31:22.848 "data_offset": 0, 00:31:22.848 "data_size": 63488 00:31:22.848 }, 00:31:22.848 { 00:31:22.848 "name": "BaseBdev2", 00:31:22.848 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:22.848 "is_configured": true, 00:31:22.848 "data_offset": 2048, 00:31:22.848 "data_size": 63488 00:31:22.848 }, 00:31:22.848 { 00:31:22.848 "name": "BaseBdev3", 00:31:22.848 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:22.848 "is_configured": true, 00:31:22.848 "data_offset": 2048, 00:31:22.848 "data_size": 63488 00:31:22.848 } 00:31:22.848 ] 00:31:22.848 }' 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.848 [2024-11-05 16:00:55.160333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:22.848 [2024-11-05 16:00:55.168666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:31:22.848 16:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.848 16:00:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:22.848 [2024-11-05 16:00:55.172856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.783 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:24.042 "name": "raid_bdev1", 00:31:24.042 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:24.042 "strip_size_kb": 64, 00:31:24.042 "state": "online", 00:31:24.042 "raid_level": "raid5f", 00:31:24.042 "superblock": true, 00:31:24.042 "num_base_bdevs": 3, 00:31:24.042 "num_base_bdevs_discovered": 3, 00:31:24.042 "num_base_bdevs_operational": 3, 00:31:24.042 "process": { 00:31:24.042 "type": "rebuild", 00:31:24.042 "target": "spare", 00:31:24.042 "progress": { 00:31:24.042 "blocks": 20480, 00:31:24.042 "percent": 16 00:31:24.042 } 
00:31:24.042 }, 00:31:24.042 "base_bdevs_list": [ 00:31:24.042 { 00:31:24.042 "name": "spare", 00:31:24.042 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:24.042 "is_configured": true, 00:31:24.042 "data_offset": 2048, 00:31:24.042 "data_size": 63488 00:31:24.042 }, 00:31:24.042 { 00:31:24.042 "name": "BaseBdev2", 00:31:24.042 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:24.042 "is_configured": true, 00:31:24.042 "data_offset": 2048, 00:31:24.042 "data_size": 63488 00:31:24.042 }, 00:31:24.042 { 00:31:24.042 "name": "BaseBdev3", 00:31:24.042 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:24.042 "is_configured": true, 00:31:24.042 "data_offset": 2048, 00:31:24.042 "data_size": 63488 00:31:24.042 } 00:31:24.042 ] 00:31:24.042 }' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:31:24.042 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=432 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:24.042 16:00:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:24.042 "name": "raid_bdev1", 00:31:24.042 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:24.042 "strip_size_kb": 64, 00:31:24.042 "state": "online", 00:31:24.042 "raid_level": "raid5f", 00:31:24.042 "superblock": true, 00:31:24.042 "num_base_bdevs": 3, 00:31:24.042 "num_base_bdevs_discovered": 3, 00:31:24.042 "num_base_bdevs_operational": 3, 00:31:24.042 "process": { 00:31:24.042 "type": "rebuild", 00:31:24.042 "target": "spare", 00:31:24.042 "progress": { 00:31:24.042 "blocks": 20480, 00:31:24.042 "percent": 16 00:31:24.042 } 00:31:24.042 }, 00:31:24.042 "base_bdevs_list": [ 00:31:24.042 { 00:31:24.042 "name": "spare", 00:31:24.042 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:24.042 "is_configured": true, 00:31:24.042 "data_offset": 2048, 00:31:24.042 
"data_size": 63488 00:31:24.042 }, 00:31:24.042 { 00:31:24.042 "name": "BaseBdev2", 00:31:24.042 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:24.042 "is_configured": true, 00:31:24.042 "data_offset": 2048, 00:31:24.042 "data_size": 63488 00:31:24.042 }, 00:31:24.042 { 00:31:24.042 "name": "BaseBdev3", 00:31:24.042 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:24.042 "is_configured": true, 00:31:24.042 "data_offset": 2048, 00:31:24.042 "data_size": 63488 00:31:24.042 } 00:31:24.042 ] 00:31:24.042 }' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:24.042 16:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.975 
16:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.975 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.234 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:25.234 "name": "raid_bdev1", 00:31:25.234 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:25.234 "strip_size_kb": 64, 00:31:25.234 "state": "online", 00:31:25.234 "raid_level": "raid5f", 00:31:25.234 "superblock": true, 00:31:25.234 "num_base_bdevs": 3, 00:31:25.234 "num_base_bdevs_discovered": 3, 00:31:25.234 "num_base_bdevs_operational": 3, 00:31:25.234 "process": { 00:31:25.234 "type": "rebuild", 00:31:25.234 "target": "spare", 00:31:25.234 "progress": { 00:31:25.234 "blocks": 43008, 00:31:25.234 "percent": 33 00:31:25.234 } 00:31:25.234 }, 00:31:25.234 "base_bdevs_list": [ 00:31:25.234 { 00:31:25.234 "name": "spare", 00:31:25.234 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:25.234 "is_configured": true, 00:31:25.234 "data_offset": 2048, 00:31:25.234 "data_size": 63488 00:31:25.234 }, 00:31:25.234 { 00:31:25.234 "name": "BaseBdev2", 00:31:25.234 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:25.234 "is_configured": true, 00:31:25.234 "data_offset": 2048, 00:31:25.234 "data_size": 63488 00:31:25.234 }, 00:31:25.234 { 00:31:25.234 "name": "BaseBdev3", 00:31:25.234 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:25.234 "is_configured": true, 00:31:25.234 "data_offset": 2048, 00:31:25.234 "data_size": 63488 00:31:25.234 } 00:31:25.234 ] 00:31:25.234 }' 00:31:25.234 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:25.234 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.234 16:00:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:25.234 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.234 16:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:26.169 "name": "raid_bdev1", 00:31:26.169 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:26.169 "strip_size_kb": 64, 00:31:26.169 "state": "online", 00:31:26.169 "raid_level": "raid5f", 00:31:26.169 "superblock": true, 00:31:26.169 "num_base_bdevs": 3, 00:31:26.169 "num_base_bdevs_discovered": 3, 00:31:26.169 "num_base_bdevs_operational": 
3, 00:31:26.169 "process": { 00:31:26.169 "type": "rebuild", 00:31:26.169 "target": "spare", 00:31:26.169 "progress": { 00:31:26.169 "blocks": 65536, 00:31:26.169 "percent": 51 00:31:26.169 } 00:31:26.169 }, 00:31:26.169 "base_bdevs_list": [ 00:31:26.169 { 00:31:26.169 "name": "spare", 00:31:26.169 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:26.169 "is_configured": true, 00:31:26.169 "data_offset": 2048, 00:31:26.169 "data_size": 63488 00:31:26.169 }, 00:31:26.169 { 00:31:26.169 "name": "BaseBdev2", 00:31:26.169 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:26.169 "is_configured": true, 00:31:26.169 "data_offset": 2048, 00:31:26.169 "data_size": 63488 00:31:26.169 }, 00:31:26.169 { 00:31:26.169 "name": "BaseBdev3", 00:31:26.169 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:26.169 "is_configured": true, 00:31:26.169 "data_offset": 2048, 00:31:26.169 "data_size": 63488 00:31:26.169 } 00:31:26.169 ] 00:31:26.169 }' 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:26.169 16:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:27.545 
16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:27.545 "name": "raid_bdev1", 00:31:27.545 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:27.545 "strip_size_kb": 64, 00:31:27.545 "state": "online", 00:31:27.545 "raid_level": "raid5f", 00:31:27.545 "superblock": true, 00:31:27.545 "num_base_bdevs": 3, 00:31:27.545 "num_base_bdevs_discovered": 3, 00:31:27.545 "num_base_bdevs_operational": 3, 00:31:27.545 "process": { 00:31:27.545 "type": "rebuild", 00:31:27.545 "target": "spare", 00:31:27.545 "progress": { 00:31:27.545 "blocks": 88064, 00:31:27.545 "percent": 69 00:31:27.545 } 00:31:27.545 }, 00:31:27.545 "base_bdevs_list": [ 00:31:27.545 { 00:31:27.545 "name": "spare", 00:31:27.545 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:27.545 "is_configured": true, 00:31:27.545 "data_offset": 2048, 00:31:27.545 "data_size": 63488 00:31:27.545 }, 00:31:27.545 { 00:31:27.545 "name": "BaseBdev2", 00:31:27.545 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:27.545 "is_configured": true, 00:31:27.545 "data_offset": 2048, 00:31:27.545 "data_size": 63488 00:31:27.545 }, 00:31:27.545 { 00:31:27.545 "name": "BaseBdev3", 00:31:27.545 "uuid": 
"412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:27.545 "is_configured": true, 00:31:27.545 "data_offset": 2048, 00:31:27.545 "data_size": 63488 00:31:27.545 } 00:31:27.545 ] 00:31:27.545 }' 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:27.545 16:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.479 
16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:28.479 "name": "raid_bdev1", 00:31:28.479 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:28.479 "strip_size_kb": 64, 00:31:28.479 "state": "online", 00:31:28.479 "raid_level": "raid5f", 00:31:28.479 "superblock": true, 00:31:28.479 "num_base_bdevs": 3, 00:31:28.479 "num_base_bdevs_discovered": 3, 00:31:28.479 "num_base_bdevs_operational": 3, 00:31:28.479 "process": { 00:31:28.479 "type": "rebuild", 00:31:28.479 "target": "spare", 00:31:28.479 "progress": { 00:31:28.479 "blocks": 110592, 00:31:28.479 "percent": 87 00:31:28.479 } 00:31:28.479 }, 00:31:28.479 "base_bdevs_list": [ 00:31:28.479 { 00:31:28.479 "name": "spare", 00:31:28.479 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:28.479 "is_configured": true, 00:31:28.479 "data_offset": 2048, 00:31:28.479 "data_size": 63488 00:31:28.479 }, 00:31:28.479 { 00:31:28.479 "name": "BaseBdev2", 00:31:28.479 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:28.479 "is_configured": true, 00:31:28.479 "data_offset": 2048, 00:31:28.479 "data_size": 63488 00:31:28.479 }, 00:31:28.479 { 00:31:28.479 "name": "BaseBdev3", 00:31:28.479 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:28.479 "is_configured": true, 00:31:28.479 "data_offset": 2048, 00:31:28.479 "data_size": 63488 00:31:28.479 } 00:31:28.479 ] 00:31:28.479 }' 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:28.479 16:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:29.047 [2024-11-05 16:01:01.417126] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:29.047 [2024-11-05 16:01:01.417204] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:29.047 [2024-11-05 16:01:01.417308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.615 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:29.615 "name": "raid_bdev1", 00:31:29.615 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:29.615 "strip_size_kb": 64, 00:31:29.615 "state": "online", 00:31:29.615 "raid_level": "raid5f", 00:31:29.615 "superblock": true, 00:31:29.615 "num_base_bdevs": 3, 00:31:29.615 "num_base_bdevs_discovered": 3, 
00:31:29.615 "num_base_bdevs_operational": 3, 00:31:29.616 "base_bdevs_list": [ 00:31:29.616 { 00:31:29.616 "name": "spare", 00:31:29.616 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 }, 00:31:29.616 { 00:31:29.616 "name": "BaseBdev2", 00:31:29.616 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 }, 00:31:29.616 { 00:31:29.616 "name": "BaseBdev3", 00:31:29.616 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 } 00:31:29.616 ] 00:31:29.616 }' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:29.616 "name": "raid_bdev1", 00:31:29.616 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:29.616 "strip_size_kb": 64, 00:31:29.616 "state": "online", 00:31:29.616 "raid_level": "raid5f", 00:31:29.616 "superblock": true, 00:31:29.616 "num_base_bdevs": 3, 00:31:29.616 "num_base_bdevs_discovered": 3, 00:31:29.616 "num_base_bdevs_operational": 3, 00:31:29.616 "base_bdevs_list": [ 00:31:29.616 { 00:31:29.616 "name": "spare", 00:31:29.616 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 }, 00:31:29.616 { 00:31:29.616 "name": "BaseBdev2", 00:31:29.616 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 }, 00:31:29.616 { 00:31:29.616 "name": "BaseBdev3", 00:31:29.616 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 } 00:31:29.616 ] 00:31:29.616 }' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.616 16:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.616 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:29.616 "name": "raid_bdev1", 00:31:29.616 "uuid": 
"c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:29.616 "strip_size_kb": 64, 00:31:29.616 "state": "online", 00:31:29.616 "raid_level": "raid5f", 00:31:29.616 "superblock": true, 00:31:29.616 "num_base_bdevs": 3, 00:31:29.616 "num_base_bdevs_discovered": 3, 00:31:29.616 "num_base_bdevs_operational": 3, 00:31:29.616 "base_bdevs_list": [ 00:31:29.616 { 00:31:29.616 "name": "spare", 00:31:29.616 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 }, 00:31:29.616 { 00:31:29.616 "name": "BaseBdev2", 00:31:29.616 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 }, 00:31:29.616 { 00:31:29.616 "name": "BaseBdev3", 00:31:29.616 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:29.616 "is_configured": true, 00:31:29.616 "data_offset": 2048, 00:31:29.616 "data_size": 63488 00:31:29.616 } 00:31:29.616 ] 00:31:29.616 }' 00:31:29.616 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:29.616 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.182 [2024-11-05 16:01:02.299048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:30.182 [2024-11-05 16:01:02.299148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:30.182 [2024-11-05 16:01:02.299217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:30.182 [2024-11-05 16:01:02.299281] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:30.182 [2024-11-05 16:01:02.299293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:30.182 /dev/nbd0 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:30.182 1+0 records in 00:31:30.182 1+0 records out 00:31:30.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255835 s, 16.0 MB/s 00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.182 16:01:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:31:30.182 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:31:30.440 /dev/nbd1
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:31:30.440 1+0 records in
00:31:30.440 1+0 records out
00:31:30.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287932 s, 14.2 MB/s
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:31:30.440 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:31:30.699 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:31:30.699 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:31:30.699 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:31:30.699 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:31:30.699 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:31:30.699 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:31:30.699 16:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:31:30.699 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:30.957 [2024-11-05 16:01:03.331893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:31:30.957 [2024-11-05 16:01:03.332019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:30.957 [2024-11-05 16:01:03.332040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:31:30.957 [2024-11-05 16:01:03.332049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:30.957 [2024-11-05 16:01:03.333793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:30.957 [2024-11-05 16:01:03.333824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:31:30.957 [2024-11-05 16:01:03.333897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:31:30.957 [2024-11-05 16:01:03.333937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:31:30.957 [2024-11-05 16:01:03.334035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:31:30.957 [2024-11-05 16:01:03.334109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:31:30.957 spare
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:30.957 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.215 [2024-11-05 16:01:03.434171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:31:31.215 [2024-11-05 16:01:03.434193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:31:31.215 [2024-11-05 16:01:03.434415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700
00:31:31.215 [2024-11-05 16:01:03.437163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:31:31.215 [2024-11-05 16:01:03.437179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:31:31.215 [2024-11-05 16:01:03.437319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.215 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:31.215 "name": "raid_bdev1",
00:31:31.216 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:31.216 "strip_size_kb": 64,
00:31:31.216 "state": "online",
00:31:31.216 "raid_level": "raid5f",
00:31:31.216 "superblock": true,
00:31:31.216 "num_base_bdevs": 3,
00:31:31.216 "num_base_bdevs_discovered": 3,
00:31:31.216 "num_base_bdevs_operational": 3,
00:31:31.216 "base_bdevs_list": [
00:31:31.216 {
00:31:31.216 "name": "spare",
00:31:31.216 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0",
00:31:31.216 "is_configured": true,
00:31:31.216 "data_offset": 2048,
00:31:31.216 "data_size": 63488
00:31:31.216 },
00:31:31.216 {
00:31:31.216 "name": "BaseBdev2",
00:31:31.216 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:31.216 "is_configured": true,
00:31:31.216 "data_offset": 2048,
00:31:31.216 "data_size": 63488
00:31:31.216 },
00:31:31.216 {
00:31:31.216 "name": "BaseBdev3",
00:31:31.216 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
00:31:31.216 "is_configured": true,
00:31:31.216 "data_offset": 2048,
00:31:31.216 "data_size": 63488
00:31:31.216 }
00:31:31.216 ]
00:31:31.216 }'
00:31:31.216 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:31.216 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:31:31.474 "name": "raid_bdev1",
00:31:31.474 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:31.474 "strip_size_kb": 64,
00:31:31.474 "state": "online",
00:31:31.474 "raid_level": "raid5f",
00:31:31.474 "superblock": true,
00:31:31.474 "num_base_bdevs": 3,
00:31:31.474 "num_base_bdevs_discovered": 3,
00:31:31.474 "num_base_bdevs_operational": 3,
00:31:31.474 "base_bdevs_list": [
00:31:31.474 {
00:31:31.474 "name": "spare",
00:31:31.474 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0",
00:31:31.474 "is_configured": true,
00:31:31.474 "data_offset": 2048,
00:31:31.474 "data_size": 63488
00:31:31.474 },
00:31:31.474 {
00:31:31.474 "name": "BaseBdev2",
00:31:31.474 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:31.474 "is_configured": true,
00:31:31.474 "data_offset": 2048,
00:31:31.474 "data_size": 63488
00:31:31.474 },
00:31:31.474 {
00:31:31.474 "name": "BaseBdev3",
00:31:31.474 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
00:31:31.474 "is_configured": true,
00:31:31.474 "data_offset": 2048,
00:31:31.474 "data_size": 63488
00:31:31.474 }
00:31:31.474 ]
00:31:31.474 }'
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.474 [2024-11-05 16:01:03.885378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:31.474 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:31.732 "name": "raid_bdev1",
00:31:31.732 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:31.732 "strip_size_kb": 64,
00:31:31.732 "state": "online",
00:31:31.732 "raid_level": "raid5f",
00:31:31.732 "superblock": true,
00:31:31.732 "num_base_bdevs": 3,
00:31:31.732 "num_base_bdevs_discovered": 2,
00:31:31.732 "num_base_bdevs_operational": 2,
00:31:31.732 "base_bdevs_list": [
00:31:31.732 {
00:31:31.732 "name": null,
00:31:31.732 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:31.732 "is_configured": false,
00:31:31.732 "data_offset": 0,
00:31:31.732 "data_size": 63488
00:31:31.732 },
00:31:31.732 {
00:31:31.732 "name": "BaseBdev2",
00:31:31.732 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:31.732 "is_configured": true,
00:31:31.732 "data_offset": 2048,
00:31:31.732 "data_size": 63488
00:31:31.732 },
00:31:31.732 {
00:31:31.732 "name": "BaseBdev3",
00:31:31.732 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
00:31:31.732 "is_configured": true,
00:31:31.732 "data_offset": 2048,
00:31:31.732 "data_size": 63488
00:31:31.732 }
00:31:31.732 ]
00:31:31.732 }'
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:31.732 16:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.990 16:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:31:31.990 16:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:31.990 16:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:31.990 [2024-11-05 16:01:04.205455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:31:31.990 [2024-11-05 16:01:04.205591] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:31:31.990 [2024-11-05 16:01:04.205604] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:31:31.990 [2024-11-05 16:01:04.205635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:31:31.990 [2024-11-05 16:01:04.213539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0
00:31:31.990 16:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.990 16:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:31:31.990 [2024-11-05 16:01:04.217820] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:31:32.935 "name": "raid_bdev1",
00:31:32.935 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:32.935 "strip_size_kb": 64,
00:31:32.935 "state": "online",
00:31:32.935 "raid_level": "raid5f",
00:31:32.935 "superblock": true,
00:31:32.935 "num_base_bdevs": 3,
00:31:32.935 "num_base_bdevs_discovered": 3,
00:31:32.935 "num_base_bdevs_operational": 3,
00:31:32.935 "process": {
00:31:32.935 "type": "rebuild",
00:31:32.935 "target": "spare",
00:31:32.935 "progress": {
00:31:32.935 "blocks": 20480,
00:31:32.935 "percent": 16
00:31:32.935 }
00:31:32.935 },
00:31:32.935 "base_bdevs_list": [
00:31:32.935 {
00:31:32.935 "name": "spare",
00:31:32.935 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0",
00:31:32.935 "is_configured": true,
00:31:32.935 "data_offset": 2048,
00:31:32.935 "data_size": 63488
00:31:32.935 },
00:31:32.935 {
00:31:32.935 "name": "BaseBdev2",
00:31:32.935 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:32.935 "is_configured": true,
00:31:32.935 "data_offset": 2048,
00:31:32.935 "data_size": 63488
00:31:32.935 },
00:31:32.935 {
00:31:32.935 "name": "BaseBdev3",
00:31:32.935 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
00:31:32.935 "is_configured": true,
00:31:32.935 "data_offset": 2048,
00:31:32.935 "data_size": 63488
00:31:32.935 }
00:31:32.935 ]
00:31:32.935 }'
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:32.935 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:32.935 [2024-11-05 16:01:05.322495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:32.935 [2024-11-05 16:01:05.326020] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:31:32.935 [2024-11-05 16:01:05.326068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:32.935 [2024-11-05 16:01:05.326081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:32.935 [2024-11-05 16:01:05.326088] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:33.194 "name": "raid_bdev1",
00:31:33.194 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:33.194 "strip_size_kb": 64,
00:31:33.194 "state": "online",
00:31:33.194 "raid_level": "raid5f",
00:31:33.194 "superblock": true,
00:31:33.194 "num_base_bdevs": 3,
00:31:33.194 "num_base_bdevs_discovered": 2,
00:31:33.194 "num_base_bdevs_operational": 2,
00:31:33.194 "base_bdevs_list": [
00:31:33.194 {
00:31:33.194 "name": null,
00:31:33.194 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:33.194 "is_configured": false,
00:31:33.194 "data_offset": 0,
00:31:33.194 "data_size": 63488
00:31:33.194 },
00:31:33.194 {
00:31:33.194 "name": "BaseBdev2",
00:31:33.194 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:33.194 "is_configured": true,
00:31:33.194 "data_offset": 2048,
00:31:33.194 "data_size": 63488
00:31:33.194 },
00:31:33.194 {
00:31:33.194 "name": "BaseBdev3",
00:31:33.194 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
00:31:33.194 "is_configured": true,
00:31:33.194 "data_offset": 2048,
00:31:33.194 "data_size": 63488
00:31:33.194 }
00:31:33.194 ]
00:31:33.194 }'
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:33.194 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:33.452 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:31:33.452 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:33.452 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:33.452 [2024-11-05 16:01:05.659709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:31:33.452 [2024-11-05 16:01:05.659760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:33.452 [2024-11-05 16:01:05.659776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:31:33.452 [2024-11-05 16:01:05.659790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:33.452 [2024-11-05 16:01:05.660154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:33.452 [2024-11-05 16:01:05.660169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:31:33.452 [2024-11-05 16:01:05.660239] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:31:33.452 [2024-11-05 16:01:05.660250] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:31:33.452 [2024-11-05 16:01:05.660257] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:31:33.452 [2024-11-05 16:01:05.660275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:31:33.452 [2024-11-05 16:01:05.668190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0
00:31:33.452 spare
00:31:33.452 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:33.452 16:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:31:33.452 [2024-11-05 16:01:05.672336] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:31:34.386 "name": "raid_bdev1",
00:31:34.386 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:34.386 "strip_size_kb": 64,
00:31:34.386 "state": "online",
00:31:34.386 "raid_level": "raid5f",
00:31:34.386 "superblock": true,
00:31:34.386 "num_base_bdevs": 3,
00:31:34.386 "num_base_bdevs_discovered": 3,
00:31:34.386 "num_base_bdevs_operational": 3,
00:31:34.386 "process": {
00:31:34.386 "type": "rebuild",
00:31:34.386 "target": "spare",
00:31:34.386 "progress": {
00:31:34.386 "blocks": 20480,
00:31:34.386 "percent": 16
00:31:34.386 }
00:31:34.386 },
00:31:34.386 "base_bdevs_list": [
00:31:34.386 {
00:31:34.386 "name": "spare",
00:31:34.386 "uuid": "44ad60c6-2bea-5a59-99e1-3b6d481c11a0",
00:31:34.386 "is_configured": true,
00:31:34.386 "data_offset": 2048,
00:31:34.386 "data_size": 63488
00:31:34.386 },
00:31:34.386 {
00:31:34.386 "name": "BaseBdev2",
00:31:34.386 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:34.386 "is_configured": true,
00:31:34.386 "data_offset": 2048,
00:31:34.386 "data_size": 63488
00:31:34.386 },
00:31:34.386 {
00:31:34.386 "name": "BaseBdev3",
00:31:34.386 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
00:31:34.386 "is_configured": true,
00:31:34.386 "data_offset": 2048,
00:31:34.386 "data_size": 63488
00:31:34.386 }
00:31:34.386 ]
00:31:34.386 }'
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:34.386 [2024-11-05 16:01:06.777341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:34.386 [2024-11-05 16:01:06.780098] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:31:34.386 [2024-11-05 16:01:06.780220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:34.386 [2024-11-05 16:01:06.780319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:34.386 [2024-11-05 16:01:06.780338] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:34.386 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:34.644 "name": "raid_bdev1",
00:31:34.644 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:34.644 "strip_size_kb": 64,
00:31:34.644 "state": "online",
00:31:34.644 "raid_level": "raid5f",
00:31:34.644 "superblock": true,
00:31:34.644 "num_base_bdevs": 3,
00:31:34.644 "num_base_bdevs_discovered": 2,
00:31:34.644 "num_base_bdevs_operational": 2,
00:31:34.644 "base_bdevs_list": [
00:31:34.644 {
00:31:34.644 "name": null,
00:31:34.644 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:34.644 "is_configured": false,
00:31:34.644 "data_offset": 0,
00:31:34.644 "data_size": 63488
00:31:34.644 },
00:31:34.644 {
00:31:34.644 "name": "BaseBdev2",
00:31:34.644 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:34.644 "is_configured": true,
00:31:34.644 "data_offset": 2048,
00:31:34.644 "data_size": 63488
00:31:34.644 },
00:31:34.644 {
00:31:34.644 "name": "BaseBdev3",
00:31:34.644 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
00:31:34.644 "is_configured": true,
00:31:34.644 "data_offset": 2048,
00:31:34.644 "data_size": 63488
00:31:34.644 }
00:31:34.644 ]
00:31:34.644 }'
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:34.644 16:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:31:34.903 "name": "raid_bdev1",
00:31:34.903 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb",
00:31:34.903 "strip_size_kb": 64,
00:31:34.903 "state": "online",
00:31:34.903 "raid_level": "raid5f",
00:31:34.903 "superblock": true,
00:31:34.903 "num_base_bdevs": 3,
00:31:34.903 "num_base_bdevs_discovered": 2,
00:31:34.903 "num_base_bdevs_operational": 2,
00:31:34.903 "base_bdevs_list": [
00:31:34.903 {
00:31:34.903 "name": null,
00:31:34.903 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:34.903 "is_configured": false,
00:31:34.903 "data_offset": 0,
00:31:34.903 "data_size": 63488
00:31:34.903 },
00:31:34.903 {
00:31:34.903 "name": "BaseBdev2",
00:31:34.903 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4",
00:31:34.903 "is_configured": true,
00:31:34.903 "data_offset": 2048,
00:31:34.903 "data_size": 63488
00:31:34.903 },
00:31:34.903 {
00:31:34.903 "name": "BaseBdev3",
00:31:34.903 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7",
"is_configured": true, 00:31:34.903 "data_offset": 2048, 00:31:34.903 "data_size": 63488 00:31:34.903 } 00:31:34.903 ] 00:31:34.903 }' 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.903 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.904 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:34.904 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.904 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.904 [2024-11-05 16:01:07.230050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:34.904 [2024-11-05 16:01:07.230163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:34.904 [2024-11-05 16:01:07.230185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:31:34.904 [2024-11-05 16:01:07.230193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:34.904 [2024-11-05 16:01:07.230530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:34.904 
[2024-11-05 16:01:07.230541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:34.904 [2024-11-05 16:01:07.230613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:34.904 [2024-11-05 16:01:07.230628] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:34.904 [2024-11-05 16:01:07.230636] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:34.904 [2024-11-05 16:01:07.230643] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:31:34.904 BaseBdev1 00:31:34.904 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.904 16:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:35.838 16:01:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.838 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.096 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.096 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:36.096 "name": "raid_bdev1", 00:31:36.096 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:36.096 "strip_size_kb": 64, 00:31:36.096 "state": "online", 00:31:36.096 "raid_level": "raid5f", 00:31:36.096 "superblock": true, 00:31:36.096 "num_base_bdevs": 3, 00:31:36.096 "num_base_bdevs_discovered": 2, 00:31:36.096 "num_base_bdevs_operational": 2, 00:31:36.096 "base_bdevs_list": [ 00:31:36.096 { 00:31:36.096 "name": null, 00:31:36.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.096 "is_configured": false, 00:31:36.096 "data_offset": 0, 00:31:36.096 "data_size": 63488 00:31:36.096 }, 00:31:36.096 { 00:31:36.096 "name": "BaseBdev2", 00:31:36.096 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:36.096 "is_configured": true, 00:31:36.096 "data_offset": 2048, 00:31:36.096 "data_size": 63488 00:31:36.096 }, 00:31:36.096 { 00:31:36.096 "name": "BaseBdev3", 00:31:36.096 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:36.096 "is_configured": true, 00:31:36.096 "data_offset": 2048, 00:31:36.096 "data_size": 63488 00:31:36.096 } 00:31:36.096 ] 00:31:36.096 }' 00:31:36.096 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:36.096 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:36.354 "name": "raid_bdev1", 00:31:36.354 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:36.354 "strip_size_kb": 64, 00:31:36.354 "state": "online", 00:31:36.354 "raid_level": "raid5f", 00:31:36.354 "superblock": true, 00:31:36.354 "num_base_bdevs": 3, 00:31:36.354 "num_base_bdevs_discovered": 2, 00:31:36.354 "num_base_bdevs_operational": 2, 00:31:36.354 "base_bdevs_list": [ 00:31:36.354 { 00:31:36.354 "name": null, 00:31:36.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.354 "is_configured": false, 00:31:36.354 "data_offset": 0, 00:31:36.354 "data_size": 63488 00:31:36.354 }, 00:31:36.354 { 00:31:36.354 "name": "BaseBdev2", 00:31:36.354 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 
00:31:36.354 "is_configured": true, 00:31:36.354 "data_offset": 2048, 00:31:36.354 "data_size": 63488 00:31:36.354 }, 00:31:36.354 { 00:31:36.354 "name": "BaseBdev3", 00:31:36.354 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:36.354 "is_configured": true, 00:31:36.354 "data_offset": 2048, 00:31:36.354 "data_size": 63488 00:31:36.354 } 00:31:36.354 ] 00:31:36.354 }' 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.354 16:01:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.354 [2024-11-05 16:01:08.662353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:36.354 [2024-11-05 16:01:08.662467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:36.354 [2024-11-05 16:01:08.662479] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:36.354 request: 00:31:36.354 { 00:31:36.354 "base_bdev": "BaseBdev1", 00:31:36.354 "raid_bdev": "raid_bdev1", 00:31:36.354 "method": "bdev_raid_add_base_bdev", 00:31:36.354 "req_id": 1 00:31:36.354 } 00:31:36.354 Got JSON-RPC error response 00:31:36.354 response: 00:31:36.354 { 00:31:36.354 "code": -22, 00:31:36.354 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:36.354 } 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:36.354 16:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:31:37.359 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:37.359 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:37.360 "name": "raid_bdev1", 00:31:37.360 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:37.360 "strip_size_kb": 64, 00:31:37.360 "state": "online", 00:31:37.360 "raid_level": "raid5f", 00:31:37.360 "superblock": true, 00:31:37.360 "num_base_bdevs": 3, 00:31:37.360 "num_base_bdevs_discovered": 2, 00:31:37.360 "num_base_bdevs_operational": 2, 00:31:37.360 "base_bdevs_list": [ 00:31:37.360 { 00:31:37.360 "name": null, 00:31:37.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.360 "is_configured": false, 00:31:37.360 "data_offset": 0, 00:31:37.360 "data_size": 63488 00:31:37.360 }, 00:31:37.360 { 00:31:37.360 
"name": "BaseBdev2", 00:31:37.360 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:37.360 "is_configured": true, 00:31:37.360 "data_offset": 2048, 00:31:37.360 "data_size": 63488 00:31:37.360 }, 00:31:37.360 { 00:31:37.360 "name": "BaseBdev3", 00:31:37.360 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:37.360 "is_configured": true, 00:31:37.360 "data_offset": 2048, 00:31:37.360 "data_size": 63488 00:31:37.360 } 00:31:37.360 ] 00:31:37.360 }' 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:37.360 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.621 16:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.621 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.621 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:37.621 "name": "raid_bdev1", 00:31:37.621 "uuid": "c8675a28-df87-4f96-90d5-f4d99a799ddb", 00:31:37.621 
"strip_size_kb": 64, 00:31:37.621 "state": "online", 00:31:37.621 "raid_level": "raid5f", 00:31:37.621 "superblock": true, 00:31:37.621 "num_base_bdevs": 3, 00:31:37.621 "num_base_bdevs_discovered": 2, 00:31:37.621 "num_base_bdevs_operational": 2, 00:31:37.621 "base_bdevs_list": [ 00:31:37.621 { 00:31:37.621 "name": null, 00:31:37.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.621 "is_configured": false, 00:31:37.621 "data_offset": 0, 00:31:37.621 "data_size": 63488 00:31:37.621 }, 00:31:37.621 { 00:31:37.621 "name": "BaseBdev2", 00:31:37.621 "uuid": "4bdcc6c6-fb4d-5ed9-baca-074bf74eaeb4", 00:31:37.621 "is_configured": true, 00:31:37.621 "data_offset": 2048, 00:31:37.621 "data_size": 63488 00:31:37.621 }, 00:31:37.621 { 00:31:37.621 "name": "BaseBdev3", 00:31:37.621 "uuid": "412464e9-cd85-51c1-8b8c-ab2cf076ecc7", 00:31:37.621 "is_configured": true, 00:31:37.621 "data_offset": 2048, 00:31:37.621 "data_size": 63488 00:31:37.621 } 00:31:37.621 ] 00:31:37.621 }' 00:31:37.621 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79426 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 79426 ']' 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 79426 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:37.879 16:01:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79426 00:31:37.879 killing process with pid 79426 00:31:37.879 Received shutdown signal, test time was about 60.000000 seconds 00:31:37.879 00:31:37.879 Latency(us) 00:31:37.879 [2024-11-05T16:01:10.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.879 [2024-11-05T16:01:10.294Z] =================================================================================================================== 00:31:37.879 [2024-11-05T16:01:10.294Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79426' 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 79426 00:31:37.879 [2024-11-05 16:01:10.109175] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:37.879 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 79426 00:31:37.879 [2024-11-05 16:01:10.109267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:37.879 [2024-11-05 16:01:10.109318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:37.879 [2024-11-05 16:01:10.109328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:38.137 [2024-11-05 16:01:10.298067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:38.703 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:31:38.703 00:31:38.703 real 0m19.760s 00:31:38.703 user 0m24.743s 
00:31:38.703 sys 0m1.897s 00:31:38.703 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:38.703 ************************************ 00:31:38.703 END TEST raid5f_rebuild_test_sb 00:31:38.703 16:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.703 ************************************ 00:31:38.703 16:01:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:31:38.703 16:01:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:31:38.703 16:01:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:31:38.703 16:01:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:38.703 16:01:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:38.703 ************************************ 00:31:38.703 START TEST raid5f_state_function_test 00:31:38.703 ************************************ 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:38.703 Process raid pid: 80146 00:31:38.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80146 00:31:38.703 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80146' 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80146 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80146 ']' 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:38.704 16:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:38.704 [2024-11-05 16:01:10.944991] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:31:38.704 [2024-11-05 16:01:10.945083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.704 [2024-11-05 16:01:11.094790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.961 [2024-11-05 16:01:11.174965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.961 [2024-11-05 16:01:11.282123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:38.961 [2024-11-05 16:01:11.282146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:39.538 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:39.538 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:31:39.538 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:39.538 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.538 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.538 [2024-11-05 16:01:11.758120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:39.539 [2024-11-05 16:01:11.758160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:39.539 [2024-11-05 16:01:11.758168] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:39.539 [2024-11-05 16:01:11.758175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:39.539 [2024-11-05 16:01:11.758180] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:31:39.539 [2024-11-05 16:01:11.758187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:39.539 [2024-11-05 16:01:11.758192] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:39.539 [2024-11-05 16:01:11.758198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:39.539 16:01:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.539 "name": "Existed_Raid", 00:31:39.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.539 "strip_size_kb": 64, 00:31:39.539 "state": "configuring", 00:31:39.539 "raid_level": "raid5f", 00:31:39.539 "superblock": false, 00:31:39.539 "num_base_bdevs": 4, 00:31:39.539 "num_base_bdevs_discovered": 0, 00:31:39.539 "num_base_bdevs_operational": 4, 00:31:39.539 "base_bdevs_list": [ 00:31:39.539 { 00:31:39.539 "name": "BaseBdev1", 00:31:39.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.539 "is_configured": false, 00:31:39.539 "data_offset": 0, 00:31:39.539 "data_size": 0 00:31:39.539 }, 00:31:39.539 { 00:31:39.539 "name": "BaseBdev2", 00:31:39.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.539 "is_configured": false, 00:31:39.539 "data_offset": 0, 00:31:39.539 "data_size": 0 00:31:39.539 }, 00:31:39.539 { 00:31:39.539 "name": "BaseBdev3", 00:31:39.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.539 "is_configured": false, 00:31:39.539 "data_offset": 0, 00:31:39.539 "data_size": 0 00:31:39.539 }, 00:31:39.539 { 00:31:39.539 "name": "BaseBdev4", 00:31:39.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.539 "is_configured": false, 00:31:39.539 "data_offset": 0, 00:31:39.539 "data_size": 0 00:31:39.539 } 00:31:39.539 ] 00:31:39.539 }' 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.539 16:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.798 [2024-11-05 16:01:12.086146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:39.798 [2024-11-05 16:01:12.086175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.798 [2024-11-05 16:01:12.094149] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:39.798 [2024-11-05 16:01:12.094176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:39.798 [2024-11-05 16:01:12.094183] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:39.798 [2024-11-05 16:01:12.094190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:39.798 [2024-11-05 16:01:12.094195] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:39.798 [2024-11-05 16:01:12.094202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:39.798 [2024-11-05 16:01:12.094206] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:31:39.798 [2024-11-05 16:01:12.094213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.798 [2024-11-05 16:01:12.121245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:39.798 BaseBdev1 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.798 
16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.798 [ 00:31:39.798 { 00:31:39.798 "name": "BaseBdev1", 00:31:39.798 "aliases": [ 00:31:39.798 "cab33f24-6e89-4eb9-a885-d65c196bafb3" 00:31:39.798 ], 00:31:39.798 "product_name": "Malloc disk", 00:31:39.798 "block_size": 512, 00:31:39.798 "num_blocks": 65536, 00:31:39.798 "uuid": "cab33f24-6e89-4eb9-a885-d65c196bafb3", 00:31:39.798 "assigned_rate_limits": { 00:31:39.798 "rw_ios_per_sec": 0, 00:31:39.798 "rw_mbytes_per_sec": 0, 00:31:39.798 "r_mbytes_per_sec": 0, 00:31:39.798 "w_mbytes_per_sec": 0 00:31:39.798 }, 00:31:39.798 "claimed": true, 00:31:39.798 "claim_type": "exclusive_write", 00:31:39.798 "zoned": false, 00:31:39.798 "supported_io_types": { 00:31:39.798 "read": true, 00:31:39.798 "write": true, 00:31:39.798 "unmap": true, 00:31:39.798 "flush": true, 00:31:39.798 "reset": true, 00:31:39.798 "nvme_admin": false, 00:31:39.798 "nvme_io": false, 00:31:39.798 "nvme_io_md": false, 00:31:39.798 "write_zeroes": true, 00:31:39.798 "zcopy": true, 00:31:39.798 "get_zone_info": false, 00:31:39.798 "zone_management": false, 00:31:39.798 "zone_append": false, 00:31:39.798 "compare": false, 00:31:39.798 "compare_and_write": false, 00:31:39.798 "abort": true, 00:31:39.798 "seek_hole": false, 00:31:39.798 "seek_data": false, 00:31:39.798 "copy": true, 00:31:39.798 "nvme_iov_md": false 00:31:39.798 }, 00:31:39.798 "memory_domains": [ 00:31:39.798 { 00:31:39.798 "dma_device_id": "system", 00:31:39.798 "dma_device_type": 1 00:31:39.798 }, 00:31:39.798 { 00:31:39.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:39.798 "dma_device_type": 2 00:31:39.798 } 00:31:39.798 ], 00:31:39.798 "driver_specific": {} 00:31:39.798 } 
00:31:39.798 ] 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:39.798 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.798 "name": "Existed_Raid", 00:31:39.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.798 "strip_size_kb": 64, 00:31:39.798 "state": "configuring", 00:31:39.798 "raid_level": "raid5f", 00:31:39.798 "superblock": false, 00:31:39.798 "num_base_bdevs": 4, 00:31:39.798 "num_base_bdevs_discovered": 1, 00:31:39.798 "num_base_bdevs_operational": 4, 00:31:39.798 "base_bdevs_list": [ 00:31:39.798 { 00:31:39.798 "name": "BaseBdev1", 00:31:39.798 "uuid": "cab33f24-6e89-4eb9-a885-d65c196bafb3", 00:31:39.798 "is_configured": true, 00:31:39.798 "data_offset": 0, 00:31:39.798 "data_size": 65536 00:31:39.798 }, 00:31:39.798 { 00:31:39.798 "name": "BaseBdev2", 00:31:39.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.799 "is_configured": false, 00:31:39.799 "data_offset": 0, 00:31:39.799 "data_size": 0 00:31:39.799 }, 00:31:39.799 { 00:31:39.799 "name": "BaseBdev3", 00:31:39.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.799 "is_configured": false, 00:31:39.799 "data_offset": 0, 00:31:39.799 "data_size": 0 00:31:39.799 }, 00:31:39.799 { 00:31:39.799 "name": "BaseBdev4", 00:31:39.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.799 "is_configured": false, 00:31:39.799 "data_offset": 0, 00:31:39.799 "data_size": 0 00:31:39.799 } 00:31:39.799 ] 00:31:39.799 }' 00:31:39.799 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.799 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.057 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:40.057 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.057 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.057 
[2024-11-05 16:01:12.461324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:40.057 [2024-11-05 16:01:12.461366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:40.057 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.057 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:40.057 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.057 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.057 [2024-11-05 16:01:12.469371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:40.057 [2024-11-05 16:01:12.470806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:40.057 [2024-11-05 16:01:12.470850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:40.057 [2024-11-05 16:01:12.470858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:40.057 [2024-11-05 16:01:12.470866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:40.057 [2024-11-05 16:01:12.470872] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:40.057 [2024-11-05 16:01:12.470878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.318 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.318 "name": "Existed_Raid", 00:31:40.318 "uuid": "00000000-0000-0000-0000-000000000000", 
00:31:40.318 "strip_size_kb": 64, 00:31:40.318 "state": "configuring", 00:31:40.318 "raid_level": "raid5f", 00:31:40.318 "superblock": false, 00:31:40.318 "num_base_bdevs": 4, 00:31:40.318 "num_base_bdevs_discovered": 1, 00:31:40.318 "num_base_bdevs_operational": 4, 00:31:40.318 "base_bdevs_list": [ 00:31:40.318 { 00:31:40.318 "name": "BaseBdev1", 00:31:40.318 "uuid": "cab33f24-6e89-4eb9-a885-d65c196bafb3", 00:31:40.318 "is_configured": true, 00:31:40.318 "data_offset": 0, 00:31:40.318 "data_size": 65536 00:31:40.318 }, 00:31:40.318 { 00:31:40.318 "name": "BaseBdev2", 00:31:40.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.318 "is_configured": false, 00:31:40.318 "data_offset": 0, 00:31:40.318 "data_size": 0 00:31:40.318 }, 00:31:40.318 { 00:31:40.318 "name": "BaseBdev3", 00:31:40.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.318 "is_configured": false, 00:31:40.318 "data_offset": 0, 00:31:40.318 "data_size": 0 00:31:40.318 }, 00:31:40.318 { 00:31:40.318 "name": "BaseBdev4", 00:31:40.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.318 "is_configured": false, 00:31:40.318 "data_offset": 0, 00:31:40.318 "data_size": 0 00:31:40.318 } 00:31:40.318 ] 00:31:40.318 }' 00:31:40.319 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.319 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.581 [2024-11-05 16:01:12.795487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:40.581 BaseBdev2 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.581 [ 00:31:40.581 { 00:31:40.581 "name": "BaseBdev2", 00:31:40.581 "aliases": [ 00:31:40.581 "78a32a73-39d0-4f73-98ef-c454740bf1b5" 00:31:40.581 ], 00:31:40.581 "product_name": "Malloc disk", 00:31:40.581 "block_size": 512, 00:31:40.581 "num_blocks": 65536, 00:31:40.581 "uuid": "78a32a73-39d0-4f73-98ef-c454740bf1b5", 00:31:40.581 "assigned_rate_limits": { 00:31:40.581 "rw_ios_per_sec": 0, 00:31:40.581 "rw_mbytes_per_sec": 0, 00:31:40.581 
"r_mbytes_per_sec": 0, 00:31:40.581 "w_mbytes_per_sec": 0 00:31:40.581 }, 00:31:40.581 "claimed": true, 00:31:40.581 "claim_type": "exclusive_write", 00:31:40.581 "zoned": false, 00:31:40.581 "supported_io_types": { 00:31:40.581 "read": true, 00:31:40.581 "write": true, 00:31:40.581 "unmap": true, 00:31:40.581 "flush": true, 00:31:40.581 "reset": true, 00:31:40.581 "nvme_admin": false, 00:31:40.581 "nvme_io": false, 00:31:40.581 "nvme_io_md": false, 00:31:40.581 "write_zeroes": true, 00:31:40.581 "zcopy": true, 00:31:40.581 "get_zone_info": false, 00:31:40.581 "zone_management": false, 00:31:40.581 "zone_append": false, 00:31:40.581 "compare": false, 00:31:40.581 "compare_and_write": false, 00:31:40.581 "abort": true, 00:31:40.581 "seek_hole": false, 00:31:40.581 "seek_data": false, 00:31:40.581 "copy": true, 00:31:40.581 "nvme_iov_md": false 00:31:40.581 }, 00:31:40.581 "memory_domains": [ 00:31:40.581 { 00:31:40.581 "dma_device_id": "system", 00:31:40.581 "dma_device_type": 1 00:31:40.581 }, 00:31:40.581 { 00:31:40.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:40.581 "dma_device_type": 2 00:31:40.581 } 00:31:40.581 ], 00:31:40.581 "driver_specific": {} 00:31:40.581 } 00:31:40.581 ] 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.581 "name": "Existed_Raid", 00:31:40.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.581 "strip_size_kb": 64, 00:31:40.581 "state": "configuring", 00:31:40.581 "raid_level": "raid5f", 00:31:40.581 "superblock": false, 00:31:40.581 "num_base_bdevs": 4, 00:31:40.581 "num_base_bdevs_discovered": 2, 00:31:40.581 "num_base_bdevs_operational": 4, 00:31:40.581 "base_bdevs_list": [ 00:31:40.581 { 00:31:40.581 "name": "BaseBdev1", 00:31:40.581 "uuid": 
"cab33f24-6e89-4eb9-a885-d65c196bafb3", 00:31:40.581 "is_configured": true, 00:31:40.581 "data_offset": 0, 00:31:40.581 "data_size": 65536 00:31:40.581 }, 00:31:40.581 { 00:31:40.581 "name": "BaseBdev2", 00:31:40.581 "uuid": "78a32a73-39d0-4f73-98ef-c454740bf1b5", 00:31:40.581 "is_configured": true, 00:31:40.581 "data_offset": 0, 00:31:40.581 "data_size": 65536 00:31:40.581 }, 00:31:40.581 { 00:31:40.581 "name": "BaseBdev3", 00:31:40.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.581 "is_configured": false, 00:31:40.581 "data_offset": 0, 00:31:40.581 "data_size": 0 00:31:40.581 }, 00:31:40.581 { 00:31:40.581 "name": "BaseBdev4", 00:31:40.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.581 "is_configured": false, 00:31:40.581 "data_offset": 0, 00:31:40.581 "data_size": 0 00:31:40.581 } 00:31:40.581 ] 00:31:40.581 }' 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.581 16:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.840 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:40.840 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.840 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.840 [2024-11-05 16:01:13.185757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:40.840 BaseBdev3 00:31:40.840 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.840 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:40.840 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.841 [ 00:31:40.841 { 00:31:40.841 "name": "BaseBdev3", 00:31:40.841 "aliases": [ 00:31:40.841 "475c3241-fb7a-44f9-a179-256d901b9a5a" 00:31:40.841 ], 00:31:40.841 "product_name": "Malloc disk", 00:31:40.841 "block_size": 512, 00:31:40.841 "num_blocks": 65536, 00:31:40.841 "uuid": "475c3241-fb7a-44f9-a179-256d901b9a5a", 00:31:40.841 "assigned_rate_limits": { 00:31:40.841 "rw_ios_per_sec": 0, 00:31:40.841 "rw_mbytes_per_sec": 0, 00:31:40.841 "r_mbytes_per_sec": 0, 00:31:40.841 "w_mbytes_per_sec": 0 00:31:40.841 }, 00:31:40.841 "claimed": true, 00:31:40.841 "claim_type": "exclusive_write", 00:31:40.841 "zoned": false, 00:31:40.841 "supported_io_types": { 00:31:40.841 "read": true, 00:31:40.841 "write": true, 00:31:40.841 "unmap": true, 00:31:40.841 "flush": true, 00:31:40.841 "reset": true, 00:31:40.841 "nvme_admin": false, 
00:31:40.841 "nvme_io": false, 00:31:40.841 "nvme_io_md": false, 00:31:40.841 "write_zeroes": true, 00:31:40.841 "zcopy": true, 00:31:40.841 "get_zone_info": false, 00:31:40.841 "zone_management": false, 00:31:40.841 "zone_append": false, 00:31:40.841 "compare": false, 00:31:40.841 "compare_and_write": false, 00:31:40.841 "abort": true, 00:31:40.841 "seek_hole": false, 00:31:40.841 "seek_data": false, 00:31:40.841 "copy": true, 00:31:40.841 "nvme_iov_md": false 00:31:40.841 }, 00:31:40.841 "memory_domains": [ 00:31:40.841 { 00:31:40.841 "dma_device_id": "system", 00:31:40.841 "dma_device_type": 1 00:31:40.841 }, 00:31:40.841 { 00:31:40.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:40.841 "dma_device_type": 2 00:31:40.841 } 00:31:40.841 ], 00:31:40.841 "driver_specific": {} 00:31:40.841 } 00:31:40.841 ] 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.841 "name": "Existed_Raid", 00:31:40.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.841 "strip_size_kb": 64, 00:31:40.841 "state": "configuring", 00:31:40.841 "raid_level": "raid5f", 00:31:40.841 "superblock": false, 00:31:40.841 "num_base_bdevs": 4, 00:31:40.841 "num_base_bdevs_discovered": 3, 00:31:40.841 "num_base_bdevs_operational": 4, 00:31:40.841 "base_bdevs_list": [ 00:31:40.841 { 00:31:40.841 "name": "BaseBdev1", 00:31:40.841 "uuid": "cab33f24-6e89-4eb9-a885-d65c196bafb3", 00:31:40.841 "is_configured": true, 00:31:40.841 "data_offset": 0, 00:31:40.841 "data_size": 65536 00:31:40.841 }, 00:31:40.841 { 00:31:40.841 "name": "BaseBdev2", 00:31:40.841 "uuid": "78a32a73-39d0-4f73-98ef-c454740bf1b5", 00:31:40.841 "is_configured": true, 00:31:40.841 "data_offset": 0, 00:31:40.841 "data_size": 65536 00:31:40.841 }, 00:31:40.841 { 
00:31:40.841 "name": "BaseBdev3", 00:31:40.841 "uuid": "475c3241-fb7a-44f9-a179-256d901b9a5a", 00:31:40.841 "is_configured": true, 00:31:40.841 "data_offset": 0, 00:31:40.841 "data_size": 65536 00:31:40.841 }, 00:31:40.841 { 00:31:40.841 "name": "BaseBdev4", 00:31:40.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.841 "is_configured": false, 00:31:40.841 "data_offset": 0, 00:31:40.841 "data_size": 0 00:31:40.841 } 00:31:40.841 ] 00:31:40.841 }' 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.841 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.407 [2024-11-05 16:01:13.551654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:41.407 [2024-11-05 16:01:13.551701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:41.407 [2024-11-05 16:01:13.551707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:31:41.407 [2024-11-05 16:01:13.551923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:41.407 [2024-11-05 16:01:13.555742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:41.407 [2024-11-05 16:01:13.555765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:41.407 [2024-11-05 16:01:13.555974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.407 BaseBdev4 00:31:41.407 16:01:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.407 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.407 [ 00:31:41.407 { 00:31:41.407 "name": "BaseBdev4", 00:31:41.407 "aliases": [ 00:31:41.407 "212cd64e-5e43-4229-b322-f369f2d59d20" 00:31:41.407 ], 00:31:41.407 "product_name": "Malloc disk", 00:31:41.407 "block_size": 512, 00:31:41.408 "num_blocks": 65536, 00:31:41.408 "uuid": "212cd64e-5e43-4229-b322-f369f2d59d20", 00:31:41.408 "assigned_rate_limits": { 00:31:41.408 "rw_ios_per_sec": 0, 00:31:41.408 
"rw_mbytes_per_sec": 0, 00:31:41.408 "r_mbytes_per_sec": 0, 00:31:41.408 "w_mbytes_per_sec": 0 00:31:41.408 }, 00:31:41.408 "claimed": true, 00:31:41.408 "claim_type": "exclusive_write", 00:31:41.408 "zoned": false, 00:31:41.408 "supported_io_types": { 00:31:41.408 "read": true, 00:31:41.408 "write": true, 00:31:41.408 "unmap": true, 00:31:41.408 "flush": true, 00:31:41.408 "reset": true, 00:31:41.408 "nvme_admin": false, 00:31:41.408 "nvme_io": false, 00:31:41.408 "nvme_io_md": false, 00:31:41.408 "write_zeroes": true, 00:31:41.408 "zcopy": true, 00:31:41.408 "get_zone_info": false, 00:31:41.408 "zone_management": false, 00:31:41.408 "zone_append": false, 00:31:41.408 "compare": false, 00:31:41.408 "compare_and_write": false, 00:31:41.408 "abort": true, 00:31:41.408 "seek_hole": false, 00:31:41.408 "seek_data": false, 00:31:41.408 "copy": true, 00:31:41.408 "nvme_iov_md": false 00:31:41.408 }, 00:31:41.408 "memory_domains": [ 00:31:41.408 { 00:31:41.408 "dma_device_id": "system", 00:31:41.408 "dma_device_type": 1 00:31:41.408 }, 00:31:41.408 { 00:31:41.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:41.408 "dma_device_type": 2 00:31:41.408 } 00:31:41.408 ], 00:31:41.408 "driver_specific": {} 00:31:41.408 } 00:31:41.408 ] 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:41.408 16:01:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:41.408 "name": "Existed_Raid", 00:31:41.408 "uuid": "3c81493f-be78-46ac-9a65-e04d3f7f7e12", 00:31:41.408 "strip_size_kb": 64, 00:31:41.408 "state": "online", 00:31:41.408 "raid_level": "raid5f", 00:31:41.408 "superblock": false, 00:31:41.408 "num_base_bdevs": 4, 00:31:41.408 "num_base_bdevs_discovered": 4, 00:31:41.408 "num_base_bdevs_operational": 4, 00:31:41.408 "base_bdevs_list": [ 00:31:41.408 { 00:31:41.408 "name": 
"BaseBdev1", 00:31:41.408 "uuid": "cab33f24-6e89-4eb9-a885-d65c196bafb3", 00:31:41.408 "is_configured": true, 00:31:41.408 "data_offset": 0, 00:31:41.408 "data_size": 65536 00:31:41.408 }, 00:31:41.408 { 00:31:41.408 "name": "BaseBdev2", 00:31:41.408 "uuid": "78a32a73-39d0-4f73-98ef-c454740bf1b5", 00:31:41.408 "is_configured": true, 00:31:41.408 "data_offset": 0, 00:31:41.408 "data_size": 65536 00:31:41.408 }, 00:31:41.408 { 00:31:41.408 "name": "BaseBdev3", 00:31:41.408 "uuid": "475c3241-fb7a-44f9-a179-256d901b9a5a", 00:31:41.408 "is_configured": true, 00:31:41.408 "data_offset": 0, 00:31:41.408 "data_size": 65536 00:31:41.408 }, 00:31:41.408 { 00:31:41.408 "name": "BaseBdev4", 00:31:41.408 "uuid": "212cd64e-5e43-4229-b322-f369f2d59d20", 00:31:41.408 "is_configured": true, 00:31:41.408 "data_offset": 0, 00:31:41.408 "data_size": 65536 00:31:41.408 } 00:31:41.408 ] 00:31:41.408 }' 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:41.408 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.666 [2024-11-05 16:01:13.900399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.666 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.666 "name": "Existed_Raid", 00:31:41.667 "aliases": [ 00:31:41.667 "3c81493f-be78-46ac-9a65-e04d3f7f7e12" 00:31:41.667 ], 00:31:41.667 "product_name": "Raid Volume", 00:31:41.667 "block_size": 512, 00:31:41.667 "num_blocks": 196608, 00:31:41.667 "uuid": "3c81493f-be78-46ac-9a65-e04d3f7f7e12", 00:31:41.667 "assigned_rate_limits": { 00:31:41.667 "rw_ios_per_sec": 0, 00:31:41.667 "rw_mbytes_per_sec": 0, 00:31:41.667 "r_mbytes_per_sec": 0, 00:31:41.667 "w_mbytes_per_sec": 0 00:31:41.667 }, 00:31:41.667 "claimed": false, 00:31:41.667 "zoned": false, 00:31:41.667 "supported_io_types": { 00:31:41.667 "read": true, 00:31:41.667 "write": true, 00:31:41.667 "unmap": false, 00:31:41.667 "flush": false, 00:31:41.667 "reset": true, 00:31:41.667 "nvme_admin": false, 00:31:41.667 "nvme_io": false, 00:31:41.667 "nvme_io_md": false, 00:31:41.667 "write_zeroes": true, 00:31:41.667 "zcopy": false, 00:31:41.667 "get_zone_info": false, 00:31:41.667 "zone_management": false, 00:31:41.667 "zone_append": false, 00:31:41.667 "compare": false, 00:31:41.667 "compare_and_write": false, 00:31:41.667 "abort": false, 00:31:41.667 "seek_hole": false, 00:31:41.667 "seek_data": false, 00:31:41.667 "copy": false, 00:31:41.667 "nvme_iov_md": false 00:31:41.667 }, 00:31:41.667 "driver_specific": { 00:31:41.667 "raid": { 00:31:41.667 "uuid": "3c81493f-be78-46ac-9a65-e04d3f7f7e12", 00:31:41.667 "strip_size_kb": 64, 
00:31:41.667 "state": "online", 00:31:41.667 "raid_level": "raid5f", 00:31:41.667 "superblock": false, 00:31:41.667 "num_base_bdevs": 4, 00:31:41.667 "num_base_bdevs_discovered": 4, 00:31:41.667 "num_base_bdevs_operational": 4, 00:31:41.667 "base_bdevs_list": [ 00:31:41.667 { 00:31:41.667 "name": "BaseBdev1", 00:31:41.667 "uuid": "cab33f24-6e89-4eb9-a885-d65c196bafb3", 00:31:41.667 "is_configured": true, 00:31:41.667 "data_offset": 0, 00:31:41.667 "data_size": 65536 00:31:41.667 }, 00:31:41.667 { 00:31:41.667 "name": "BaseBdev2", 00:31:41.667 "uuid": "78a32a73-39d0-4f73-98ef-c454740bf1b5", 00:31:41.667 "is_configured": true, 00:31:41.667 "data_offset": 0, 00:31:41.667 "data_size": 65536 00:31:41.667 }, 00:31:41.667 { 00:31:41.667 "name": "BaseBdev3", 00:31:41.667 "uuid": "475c3241-fb7a-44f9-a179-256d901b9a5a", 00:31:41.667 "is_configured": true, 00:31:41.667 "data_offset": 0, 00:31:41.667 "data_size": 65536 00:31:41.667 }, 00:31:41.667 { 00:31:41.667 "name": "BaseBdev4", 00:31:41.667 "uuid": "212cd64e-5e43-4229-b322-f369f2d59d20", 00:31:41.667 "is_configured": true, 00:31:41.667 "data_offset": 0, 00:31:41.667 "data_size": 65536 00:31:41.667 } 00:31:41.667 ] 00:31:41.667 } 00:31:41.667 } 00:31:41.667 }' 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:41.667 BaseBdev2 00:31:41.667 BaseBdev3 00:31:41.667 BaseBdev4' 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.667 16:01:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.667 16:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.667 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:31:41.926 [2024-11-05 16:01:14.116292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:41.926 16:01:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.926 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:41.926 "name": "Existed_Raid", 00:31:41.926 "uuid": "3c81493f-be78-46ac-9a65-e04d3f7f7e12", 00:31:41.926 "strip_size_kb": 64, 00:31:41.926 "state": "online", 00:31:41.926 "raid_level": "raid5f", 00:31:41.926 "superblock": false, 00:31:41.926 "num_base_bdevs": 4, 00:31:41.926 "num_base_bdevs_discovered": 3, 00:31:41.926 "num_base_bdevs_operational": 3, 00:31:41.926 "base_bdevs_list": [ 00:31:41.926 { 00:31:41.926 "name": null, 00:31:41.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.926 "is_configured": false, 00:31:41.926 "data_offset": 0, 00:31:41.926 "data_size": 65536 00:31:41.926 }, 00:31:41.926 { 00:31:41.926 "name": "BaseBdev2", 00:31:41.926 "uuid": "78a32a73-39d0-4f73-98ef-c454740bf1b5", 00:31:41.926 "is_configured": true, 00:31:41.926 "data_offset": 0, 00:31:41.926 "data_size": 65536 00:31:41.926 }, 00:31:41.926 { 00:31:41.926 "name": "BaseBdev3", 00:31:41.927 "uuid": "475c3241-fb7a-44f9-a179-256d901b9a5a", 00:31:41.927 "is_configured": true, 00:31:41.927 "data_offset": 0, 00:31:41.927 "data_size": 65536 00:31:41.927 }, 00:31:41.927 { 00:31:41.927 "name": "BaseBdev4", 00:31:41.927 "uuid": "212cd64e-5e43-4229-b322-f369f2d59d20", 00:31:41.927 "is_configured": true, 00:31:41.927 "data_offset": 0, 00:31:41.927 "data_size": 65536 00:31:41.927 } 00:31:41.927 ] 00:31:41.927 }' 00:31:41.927 
16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:41.927 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.185 [2024-11-05 16:01:14.521241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:42.185 [2024-11-05 16:01:14.521318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:42.185 [2024-11-05 16:01:14.567116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.185 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.185 [2024-11-05 16:01:14.599159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.443 [2024-11-05 16:01:14.684888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:31:42.443 [2024-11-05 16:01:14.684998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.443 16:01:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.443 BaseBdev2 00:31:42.443 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.444 [ 00:31:42.444 { 00:31:42.444 "name": "BaseBdev2", 00:31:42.444 "aliases": [ 00:31:42.444 "4b9d28b9-70dc-4381-a2cf-8f07e3985eed" 00:31:42.444 ], 00:31:42.444 "product_name": "Malloc disk", 00:31:42.444 "block_size": 512, 00:31:42.444 "num_blocks": 65536, 00:31:42.444 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:42.444 "assigned_rate_limits": { 00:31:42.444 "rw_ios_per_sec": 0, 00:31:42.444 "rw_mbytes_per_sec": 0, 00:31:42.444 "r_mbytes_per_sec": 0, 00:31:42.444 "w_mbytes_per_sec": 0 00:31:42.444 }, 00:31:42.444 "claimed": false, 00:31:42.444 "zoned": false, 00:31:42.444 "supported_io_types": { 00:31:42.444 "read": true, 00:31:42.444 "write": true, 00:31:42.444 "unmap": true, 00:31:42.444 "flush": true, 00:31:42.444 "reset": true, 00:31:42.444 "nvme_admin": false, 00:31:42.444 "nvme_io": false, 00:31:42.444 "nvme_io_md": false, 00:31:42.444 "write_zeroes": true, 00:31:42.444 "zcopy": true, 00:31:42.444 "get_zone_info": false, 00:31:42.444 "zone_management": false, 00:31:42.444 "zone_append": false, 00:31:42.444 "compare": false, 00:31:42.444 "compare_and_write": false, 00:31:42.444 "abort": true, 00:31:42.444 "seek_hole": false, 00:31:42.444 "seek_data": false, 00:31:42.444 "copy": true, 00:31:42.444 "nvme_iov_md": false 00:31:42.444 }, 00:31:42.444 "memory_domains": [ 00:31:42.444 { 00:31:42.444 "dma_device_id": "system", 00:31:42.444 "dma_device_type": 1 00:31:42.444 }, 
00:31:42.444 { 00:31:42.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:42.444 "dma_device_type": 2 00:31:42.444 } 00:31:42.444 ], 00:31:42.444 "driver_specific": {} 00:31:42.444 } 00:31:42.444 ] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.444 BaseBdev3 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.444 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.444 [ 00:31:42.444 { 00:31:42.444 "name": "BaseBdev3", 00:31:42.444 "aliases": [ 00:31:42.702 "6f88f3a4-be2f-4698-9bb0-4be52de256d1" 00:31:42.702 ], 00:31:42.702 "product_name": "Malloc disk", 00:31:42.702 "block_size": 512, 00:31:42.702 "num_blocks": 65536, 00:31:42.702 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:42.702 "assigned_rate_limits": { 00:31:42.702 "rw_ios_per_sec": 0, 00:31:42.702 "rw_mbytes_per_sec": 0, 00:31:42.702 "r_mbytes_per_sec": 0, 00:31:42.702 "w_mbytes_per_sec": 0 00:31:42.702 }, 00:31:42.702 "claimed": false, 00:31:42.702 "zoned": false, 00:31:42.702 "supported_io_types": { 00:31:42.702 "read": true, 00:31:42.702 "write": true, 00:31:42.702 "unmap": true, 00:31:42.702 "flush": true, 00:31:42.702 "reset": true, 00:31:42.702 "nvme_admin": false, 00:31:42.702 "nvme_io": false, 00:31:42.702 "nvme_io_md": false, 00:31:42.702 "write_zeroes": true, 00:31:42.702 "zcopy": true, 00:31:42.702 "get_zone_info": false, 00:31:42.702 "zone_management": false, 00:31:42.702 "zone_append": false, 00:31:42.702 "compare": false, 00:31:42.702 "compare_and_write": false, 00:31:42.702 "abort": true, 00:31:42.702 "seek_hole": false, 00:31:42.702 "seek_data": false, 00:31:42.702 "copy": true, 00:31:42.702 "nvme_iov_md": false 00:31:42.702 }, 00:31:42.702 "memory_domains": [ 00:31:42.702 { 00:31:42.702 "dma_device_id": "system", 00:31:42.702 
"dma_device_type": 1 00:31:42.702 }, 00:31:42.702 { 00:31:42.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:42.702 "dma_device_type": 2 00:31:42.702 } 00:31:42.702 ], 00:31:42.702 "driver_specific": {} 00:31:42.702 } 00:31:42.702 ] 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.702 BaseBdev4 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:42.702 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:42.703 16:01:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 [ 00:31:42.703 { 00:31:42.703 "name": "BaseBdev4", 00:31:42.703 "aliases": [ 00:31:42.703 "91b58103-015c-4552-9558-a52e3baf5ebe" 00:31:42.703 ], 00:31:42.703 "product_name": "Malloc disk", 00:31:42.703 "block_size": 512, 00:31:42.703 "num_blocks": 65536, 00:31:42.703 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:42.703 "assigned_rate_limits": { 00:31:42.703 "rw_ios_per_sec": 0, 00:31:42.703 "rw_mbytes_per_sec": 0, 00:31:42.703 "r_mbytes_per_sec": 0, 00:31:42.703 "w_mbytes_per_sec": 0 00:31:42.703 }, 00:31:42.703 "claimed": false, 00:31:42.703 "zoned": false, 00:31:42.703 "supported_io_types": { 00:31:42.703 "read": true, 00:31:42.703 "write": true, 00:31:42.703 "unmap": true, 00:31:42.703 "flush": true, 00:31:42.703 "reset": true, 00:31:42.703 "nvme_admin": false, 00:31:42.703 "nvme_io": false, 00:31:42.703 "nvme_io_md": false, 00:31:42.703 "write_zeroes": true, 00:31:42.703 "zcopy": true, 00:31:42.703 "get_zone_info": false, 00:31:42.703 "zone_management": false, 00:31:42.703 "zone_append": false, 00:31:42.703 "compare": false, 00:31:42.703 "compare_and_write": false, 00:31:42.703 "abort": true, 00:31:42.703 "seek_hole": false, 00:31:42.703 "seek_data": false, 00:31:42.703 "copy": true, 00:31:42.703 "nvme_iov_md": false 00:31:42.703 }, 00:31:42.703 "memory_domains": [ 00:31:42.703 { 00:31:42.703 
"dma_device_id": "system", 00:31:42.703 "dma_device_type": 1 00:31:42.703 }, 00:31:42.703 { 00:31:42.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:42.703 "dma_device_type": 2 00:31:42.703 } 00:31:42.703 ], 00:31:42.703 "driver_specific": {} 00:31:42.703 } 00:31:42.703 ] 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 [2024-11-05 16:01:14.924519] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:42.703 [2024-11-05 16:01:14.924558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:42.703 [2024-11-05 16:01:14.924575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:42.703 [2024-11-05 16:01:14.925992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:42.703 [2024-11-05 16:01:14.926032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.703 "name": "Existed_Raid", 00:31:42.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.703 "strip_size_kb": 64, 00:31:42.703 "state": "configuring", 00:31:42.703 "raid_level": "raid5f", 00:31:42.703 "superblock": false, 00:31:42.703 
"num_base_bdevs": 4, 00:31:42.703 "num_base_bdevs_discovered": 3, 00:31:42.703 "num_base_bdevs_operational": 4, 00:31:42.703 "base_bdevs_list": [ 00:31:42.703 { 00:31:42.703 "name": "BaseBdev1", 00:31:42.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.703 "is_configured": false, 00:31:42.703 "data_offset": 0, 00:31:42.703 "data_size": 0 00:31:42.703 }, 00:31:42.703 { 00:31:42.703 "name": "BaseBdev2", 00:31:42.703 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:42.703 "is_configured": true, 00:31:42.703 "data_offset": 0, 00:31:42.703 "data_size": 65536 00:31:42.703 }, 00:31:42.703 { 00:31:42.703 "name": "BaseBdev3", 00:31:42.703 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:42.703 "is_configured": true, 00:31:42.703 "data_offset": 0, 00:31:42.703 "data_size": 65536 00:31:42.703 }, 00:31:42.703 { 00:31:42.703 "name": "BaseBdev4", 00:31:42.703 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:42.703 "is_configured": true, 00:31:42.703 "data_offset": 0, 00:31:42.703 "data_size": 65536 00:31:42.703 } 00:31:42.703 ] 00:31:42.703 }' 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.703 16:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.962 [2024-11-05 16:01:15.276591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.962 "name": "Existed_Raid", 00:31:42.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.962 "strip_size_kb": 64, 00:31:42.962 "state": "configuring", 00:31:42.962 "raid_level": "raid5f", 00:31:42.962 "superblock": false, 00:31:42.962 "num_base_bdevs": 4, 
00:31:42.962 "num_base_bdevs_discovered": 2, 00:31:42.962 "num_base_bdevs_operational": 4, 00:31:42.962 "base_bdevs_list": [ 00:31:42.962 { 00:31:42.962 "name": "BaseBdev1", 00:31:42.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.962 "is_configured": false, 00:31:42.962 "data_offset": 0, 00:31:42.962 "data_size": 0 00:31:42.962 }, 00:31:42.962 { 00:31:42.962 "name": null, 00:31:42.962 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:42.962 "is_configured": false, 00:31:42.962 "data_offset": 0, 00:31:42.962 "data_size": 65536 00:31:42.962 }, 00:31:42.962 { 00:31:42.962 "name": "BaseBdev3", 00:31:42.962 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:42.962 "is_configured": true, 00:31:42.962 "data_offset": 0, 00:31:42.962 "data_size": 65536 00:31:42.962 }, 00:31:42.962 { 00:31:42.962 "name": "BaseBdev4", 00:31:42.962 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:42.962 "is_configured": true, 00:31:42.962 "data_offset": 0, 00:31:42.962 "data_size": 65536 00:31:42.962 } 00:31:42.962 ] 00:31:42.962 }' 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.962 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:43.220 16:01:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.220 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.478 [2024-11-05 16:01:15.654160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:43.478 BaseBdev1 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.478 16:01:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.478 [ 00:31:43.478 { 00:31:43.478 "name": "BaseBdev1", 00:31:43.478 "aliases": [ 00:31:43.478 "76b9c269-b537-4e60-81f1-1b0cd840b8a2" 00:31:43.478 ], 00:31:43.478 "product_name": "Malloc disk", 00:31:43.478 "block_size": 512, 00:31:43.478 "num_blocks": 65536, 00:31:43.478 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:43.478 "assigned_rate_limits": { 00:31:43.478 "rw_ios_per_sec": 0, 00:31:43.478 "rw_mbytes_per_sec": 0, 00:31:43.478 "r_mbytes_per_sec": 0, 00:31:43.478 "w_mbytes_per_sec": 0 00:31:43.478 }, 00:31:43.478 "claimed": true, 00:31:43.478 "claim_type": "exclusive_write", 00:31:43.478 "zoned": false, 00:31:43.478 "supported_io_types": { 00:31:43.478 "read": true, 00:31:43.478 "write": true, 00:31:43.478 "unmap": true, 00:31:43.478 "flush": true, 00:31:43.478 "reset": true, 00:31:43.478 "nvme_admin": false, 00:31:43.478 "nvme_io": false, 00:31:43.478 "nvme_io_md": false, 00:31:43.478 "write_zeroes": true, 00:31:43.478 "zcopy": true, 00:31:43.478 "get_zone_info": false, 00:31:43.478 "zone_management": false, 00:31:43.478 "zone_append": false, 00:31:43.478 "compare": false, 00:31:43.478 "compare_and_write": false, 00:31:43.478 "abort": true, 00:31:43.478 "seek_hole": false, 00:31:43.478 "seek_data": false, 00:31:43.478 "copy": true, 00:31:43.478 "nvme_iov_md": false 00:31:43.478 }, 00:31:43.478 "memory_domains": [ 00:31:43.478 { 00:31:43.478 "dma_device_id": "system", 00:31:43.478 "dma_device_type": 1 00:31:43.478 }, 00:31:43.478 { 00:31:43.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:43.478 "dma_device_type": 2 00:31:43.478 } 00:31:43.478 ], 00:31:43.478 "driver_specific": {} 00:31:43.478 } 00:31:43.478 ] 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.478 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:43.479 16:01:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:43.479 "name": "Existed_Raid", 00:31:43.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.479 "strip_size_kb": 64, 00:31:43.479 "state": 
"configuring", 00:31:43.479 "raid_level": "raid5f", 00:31:43.479 "superblock": false, 00:31:43.479 "num_base_bdevs": 4, 00:31:43.479 "num_base_bdevs_discovered": 3, 00:31:43.479 "num_base_bdevs_operational": 4, 00:31:43.479 "base_bdevs_list": [ 00:31:43.479 { 00:31:43.479 "name": "BaseBdev1", 00:31:43.479 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:43.479 "is_configured": true, 00:31:43.479 "data_offset": 0, 00:31:43.479 "data_size": 65536 00:31:43.479 }, 00:31:43.479 { 00:31:43.479 "name": null, 00:31:43.479 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:43.479 "is_configured": false, 00:31:43.479 "data_offset": 0, 00:31:43.479 "data_size": 65536 00:31:43.479 }, 00:31:43.479 { 00:31:43.479 "name": "BaseBdev3", 00:31:43.479 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:43.479 "is_configured": true, 00:31:43.479 "data_offset": 0, 00:31:43.479 "data_size": 65536 00:31:43.479 }, 00:31:43.479 { 00:31:43.479 "name": "BaseBdev4", 00:31:43.479 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:43.479 "is_configured": true, 00:31:43.479 "data_offset": 0, 00:31:43.479 "data_size": 65536 00:31:43.479 } 00:31:43.479 ] 00:31:43.479 }' 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:43.479 16:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.737 16:01:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.737 [2024-11-05 16:01:16.034278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.737 16:01:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:43.737 "name": "Existed_Raid", 00:31:43.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.737 "strip_size_kb": 64, 00:31:43.737 "state": "configuring", 00:31:43.737 "raid_level": "raid5f", 00:31:43.737 "superblock": false, 00:31:43.737 "num_base_bdevs": 4, 00:31:43.737 "num_base_bdevs_discovered": 2, 00:31:43.737 "num_base_bdevs_operational": 4, 00:31:43.737 "base_bdevs_list": [ 00:31:43.737 { 00:31:43.737 "name": "BaseBdev1", 00:31:43.737 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:43.737 "is_configured": true, 00:31:43.737 "data_offset": 0, 00:31:43.737 "data_size": 65536 00:31:43.737 }, 00:31:43.737 { 00:31:43.737 "name": null, 00:31:43.737 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:43.737 "is_configured": false, 00:31:43.737 "data_offset": 0, 00:31:43.737 "data_size": 65536 00:31:43.737 }, 00:31:43.737 { 00:31:43.737 "name": null, 00:31:43.737 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:43.737 "is_configured": false, 00:31:43.737 "data_offset": 0, 00:31:43.737 "data_size": 65536 00:31:43.737 }, 00:31:43.737 { 00:31:43.737 "name": "BaseBdev4", 00:31:43.737 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:43.737 "is_configured": true, 00:31:43.737 "data_offset": 0, 00:31:43.737 "data_size": 65536 00:31:43.737 } 00:31:43.737 ] 00:31:43.737 }' 00:31:43.737 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:43.737 16:01:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.998 [2024-11-05 16:01:16.374338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:43.998 
16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.998 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.259 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.259 "name": "Existed_Raid", 00:31:44.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.259 "strip_size_kb": 64, 00:31:44.260 "state": "configuring", 00:31:44.260 "raid_level": "raid5f", 00:31:44.260 "superblock": false, 00:31:44.260 "num_base_bdevs": 4, 00:31:44.260 "num_base_bdevs_discovered": 3, 00:31:44.260 "num_base_bdevs_operational": 4, 00:31:44.260 "base_bdevs_list": [ 00:31:44.260 { 00:31:44.260 "name": "BaseBdev1", 00:31:44.260 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:44.260 "is_configured": true, 00:31:44.260 "data_offset": 0, 00:31:44.260 "data_size": 65536 00:31:44.260 }, 00:31:44.260 { 00:31:44.260 "name": null, 00:31:44.260 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:44.260 "is_configured": 
false, 00:31:44.260 "data_offset": 0, 00:31:44.260 "data_size": 65536 00:31:44.260 }, 00:31:44.260 { 00:31:44.260 "name": "BaseBdev3", 00:31:44.260 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:44.260 "is_configured": true, 00:31:44.260 "data_offset": 0, 00:31:44.260 "data_size": 65536 00:31:44.260 }, 00:31:44.260 { 00:31:44.260 "name": "BaseBdev4", 00:31:44.260 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:44.260 "is_configured": true, 00:31:44.260 "data_offset": 0, 00:31:44.260 "data_size": 65536 00:31:44.260 } 00:31:44.260 ] 00:31:44.260 }' 00:31:44.260 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.260 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.520 [2024-11-05 16:01:16.726415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:44.520 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.521 "name": "Existed_Raid", 00:31:44.521 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:44.521 "strip_size_kb": 64, 00:31:44.521 "state": "configuring", 00:31:44.521 "raid_level": "raid5f", 00:31:44.521 "superblock": false, 00:31:44.521 "num_base_bdevs": 4, 00:31:44.521 "num_base_bdevs_discovered": 2, 00:31:44.521 "num_base_bdevs_operational": 4, 00:31:44.521 "base_bdevs_list": [ 00:31:44.521 { 00:31:44.521 "name": null, 00:31:44.521 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:44.521 "is_configured": false, 00:31:44.521 "data_offset": 0, 00:31:44.521 "data_size": 65536 00:31:44.521 }, 00:31:44.521 { 00:31:44.521 "name": null, 00:31:44.521 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:44.521 "is_configured": false, 00:31:44.521 "data_offset": 0, 00:31:44.521 "data_size": 65536 00:31:44.521 }, 00:31:44.521 { 00:31:44.521 "name": "BaseBdev3", 00:31:44.521 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:44.521 "is_configured": true, 00:31:44.521 "data_offset": 0, 00:31:44.521 "data_size": 65536 00:31:44.521 }, 00:31:44.521 { 00:31:44.521 "name": "BaseBdev4", 00:31:44.521 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:44.521 "is_configured": true, 00:31:44.521 "data_offset": 0, 00:31:44.521 "data_size": 65536 00:31:44.521 } 00:31:44.521 ] 00:31:44.521 }' 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.521 16:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.807 [2024-11-05 16:01:17.112086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:44.807 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.808 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.808 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.808 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.808 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.808 "name": "Existed_Raid", 00:31:44.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.808 "strip_size_kb": 64, 00:31:44.808 "state": "configuring", 00:31:44.808 "raid_level": "raid5f", 00:31:44.808 "superblock": false, 00:31:44.808 "num_base_bdevs": 4, 00:31:44.808 "num_base_bdevs_discovered": 3, 00:31:44.808 "num_base_bdevs_operational": 4, 00:31:44.808 "base_bdevs_list": [ 00:31:44.808 { 00:31:44.808 "name": null, 00:31:44.808 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:44.808 "is_configured": false, 00:31:44.808 "data_offset": 0, 00:31:44.808 "data_size": 65536 00:31:44.808 }, 00:31:44.808 { 00:31:44.808 "name": "BaseBdev2", 00:31:44.808 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:44.808 "is_configured": true, 00:31:44.808 "data_offset": 0, 00:31:44.808 "data_size": 65536 00:31:44.808 }, 00:31:44.808 { 00:31:44.808 "name": "BaseBdev3", 00:31:44.808 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:44.808 "is_configured": true, 00:31:44.808 "data_offset": 0, 00:31:44.808 "data_size": 65536 00:31:44.808 }, 00:31:44.808 { 00:31:44.808 "name": "BaseBdev4", 00:31:44.808 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:44.808 "is_configured": true, 00:31:44.808 "data_offset": 0, 00:31:44.808 "data_size": 65536 00:31:44.808 } 00:31:44.808 ] 00:31:44.808 }' 00:31:44.808 16:01:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.808 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.068 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 76b9c269-b537-4e60-81f1-1b0cd840b8a2 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.329 [2024-11-05 16:01:17.522097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:45.329 [2024-11-05 
16:01:17.522133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:45.329 [2024-11-05 16:01:17.522138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:31:45.329 [2024-11-05 16:01:17.522330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:31:45.329 [2024-11-05 16:01:17.526058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:45.329 [2024-11-05 16:01:17.526076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:31:45.329 [2024-11-05 16:01:17.526259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:45.329 NewBaseBdev 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.329 [ 00:31:45.329 { 00:31:45.329 "name": "NewBaseBdev", 00:31:45.329 "aliases": [ 00:31:45.329 "76b9c269-b537-4e60-81f1-1b0cd840b8a2" 00:31:45.329 ], 00:31:45.329 "product_name": "Malloc disk", 00:31:45.329 "block_size": 512, 00:31:45.329 "num_blocks": 65536, 00:31:45.329 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:45.329 "assigned_rate_limits": { 00:31:45.329 "rw_ios_per_sec": 0, 00:31:45.329 "rw_mbytes_per_sec": 0, 00:31:45.329 "r_mbytes_per_sec": 0, 00:31:45.329 "w_mbytes_per_sec": 0 00:31:45.329 }, 00:31:45.329 "claimed": true, 00:31:45.329 "claim_type": "exclusive_write", 00:31:45.329 "zoned": false, 00:31:45.329 "supported_io_types": { 00:31:45.329 "read": true, 00:31:45.329 "write": true, 00:31:45.329 "unmap": true, 00:31:45.329 "flush": true, 00:31:45.329 "reset": true, 00:31:45.329 "nvme_admin": false, 00:31:45.329 "nvme_io": false, 00:31:45.329 "nvme_io_md": false, 00:31:45.329 "write_zeroes": true, 00:31:45.329 "zcopy": true, 00:31:45.329 "get_zone_info": false, 00:31:45.329 "zone_management": false, 00:31:45.329 "zone_append": false, 00:31:45.329 "compare": false, 00:31:45.329 "compare_and_write": false, 00:31:45.329 "abort": true, 00:31:45.329 "seek_hole": false, 00:31:45.329 "seek_data": false, 00:31:45.329 "copy": true, 00:31:45.329 "nvme_iov_md": false 00:31:45.329 }, 00:31:45.329 "memory_domains": [ 00:31:45.329 { 00:31:45.329 "dma_device_id": "system", 00:31:45.329 "dma_device_type": 1 00:31:45.329 }, 00:31:45.329 { 00:31:45.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.329 "dma_device_type": 2 00:31:45.329 } 
00:31:45.329 ], 00:31:45.329 "driver_specific": {} 00:31:45.329 } 00:31:45.329 ] 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.329 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:45.329 "name": "Existed_Raid", 00:31:45.329 "uuid": "01c21838-6127-4ffb-a6d8-d671fe46084c", 00:31:45.329 "strip_size_kb": 64, 00:31:45.329 "state": "online", 00:31:45.329 "raid_level": "raid5f", 00:31:45.329 "superblock": false, 00:31:45.329 "num_base_bdevs": 4, 00:31:45.329 "num_base_bdevs_discovered": 4, 00:31:45.330 "num_base_bdevs_operational": 4, 00:31:45.330 "base_bdevs_list": [ 00:31:45.330 { 00:31:45.330 "name": "NewBaseBdev", 00:31:45.330 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:45.330 "is_configured": true, 00:31:45.330 "data_offset": 0, 00:31:45.330 "data_size": 65536 00:31:45.330 }, 00:31:45.330 { 00:31:45.330 "name": "BaseBdev2", 00:31:45.330 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:45.330 "is_configured": true, 00:31:45.330 "data_offset": 0, 00:31:45.330 "data_size": 65536 00:31:45.330 }, 00:31:45.330 { 00:31:45.330 "name": "BaseBdev3", 00:31:45.330 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:45.330 "is_configured": true, 00:31:45.330 "data_offset": 0, 00:31:45.330 "data_size": 65536 00:31:45.330 }, 00:31:45.330 { 00:31:45.330 "name": "BaseBdev4", 00:31:45.330 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:45.330 "is_configured": true, 00:31:45.330 "data_offset": 0, 00:31:45.330 "data_size": 65536 00:31:45.330 } 00:31:45.330 ] 00:31:45.330 }' 00:31:45.330 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:45.330 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.592 [2024-11-05 16:01:17.882717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.592 "name": "Existed_Raid", 00:31:45.592 "aliases": [ 00:31:45.592 "01c21838-6127-4ffb-a6d8-d671fe46084c" 00:31:45.592 ], 00:31:45.592 "product_name": "Raid Volume", 00:31:45.592 "block_size": 512, 00:31:45.592 "num_blocks": 196608, 00:31:45.592 "uuid": "01c21838-6127-4ffb-a6d8-d671fe46084c", 00:31:45.592 "assigned_rate_limits": { 00:31:45.592 "rw_ios_per_sec": 0, 00:31:45.592 "rw_mbytes_per_sec": 0, 00:31:45.592 "r_mbytes_per_sec": 0, 00:31:45.592 "w_mbytes_per_sec": 0 00:31:45.592 }, 00:31:45.592 "claimed": false, 00:31:45.592 "zoned": false, 00:31:45.592 "supported_io_types": { 00:31:45.592 "read": true, 00:31:45.592 "write": true, 00:31:45.592 "unmap": false, 00:31:45.592 "flush": false, 00:31:45.592 "reset": true, 00:31:45.592 "nvme_admin": false, 00:31:45.592 "nvme_io": false, 00:31:45.592 "nvme_io_md": 
false, 00:31:45.592 "write_zeroes": true, 00:31:45.592 "zcopy": false, 00:31:45.592 "get_zone_info": false, 00:31:45.592 "zone_management": false, 00:31:45.592 "zone_append": false, 00:31:45.592 "compare": false, 00:31:45.592 "compare_and_write": false, 00:31:45.592 "abort": false, 00:31:45.592 "seek_hole": false, 00:31:45.592 "seek_data": false, 00:31:45.592 "copy": false, 00:31:45.592 "nvme_iov_md": false 00:31:45.592 }, 00:31:45.592 "driver_specific": { 00:31:45.592 "raid": { 00:31:45.592 "uuid": "01c21838-6127-4ffb-a6d8-d671fe46084c", 00:31:45.592 "strip_size_kb": 64, 00:31:45.592 "state": "online", 00:31:45.592 "raid_level": "raid5f", 00:31:45.592 "superblock": false, 00:31:45.592 "num_base_bdevs": 4, 00:31:45.592 "num_base_bdevs_discovered": 4, 00:31:45.592 "num_base_bdevs_operational": 4, 00:31:45.592 "base_bdevs_list": [ 00:31:45.592 { 00:31:45.592 "name": "NewBaseBdev", 00:31:45.592 "uuid": "76b9c269-b537-4e60-81f1-1b0cd840b8a2", 00:31:45.592 "is_configured": true, 00:31:45.592 "data_offset": 0, 00:31:45.592 "data_size": 65536 00:31:45.592 }, 00:31:45.592 { 00:31:45.592 "name": "BaseBdev2", 00:31:45.592 "uuid": "4b9d28b9-70dc-4381-a2cf-8f07e3985eed", 00:31:45.592 "is_configured": true, 00:31:45.592 "data_offset": 0, 00:31:45.592 "data_size": 65536 00:31:45.592 }, 00:31:45.592 { 00:31:45.592 "name": "BaseBdev3", 00:31:45.592 "uuid": "6f88f3a4-be2f-4698-9bb0-4be52de256d1", 00:31:45.592 "is_configured": true, 00:31:45.592 "data_offset": 0, 00:31:45.592 "data_size": 65536 00:31:45.592 }, 00:31:45.592 { 00:31:45.592 "name": "BaseBdev4", 00:31:45.592 "uuid": "91b58103-015c-4552-9558-a52e3baf5ebe", 00:31:45.592 "is_configured": true, 00:31:45.592 "data_offset": 0, 00:31:45.592 "data_size": 65536 00:31:45.592 } 00:31:45.592 ] 00:31:45.592 } 00:31:45.592 } 00:31:45.592 }' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:45.592 16:01:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:45.592 BaseBdev2 00:31:45.592 BaseBdev3 00:31:45.592 BaseBdev4' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.592 16:01:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.854 16:01:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.854 [2024-11-05 16:01:18.086533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:45.854 [2024-11-05 16:01:18.086560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:45.854 [2024-11-05 16:01:18.086611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:45.854 [2024-11-05 16:01:18.086835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:45.854 [2024-11-05 16:01:18.086853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80146 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80146 ']' 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 80146 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80146 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80146' 00:31:45.854 killing process with pid 80146 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80146 00:31:45.854 [2024-11-05 16:01:18.116435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:45.854 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80146 00:31:46.113 [2024-11-05 16:01:18.302955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:46.679 ************************************ 00:31:46.679 END TEST raid5f_state_function_test 00:31:46.679 ************************************ 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:46.679 00:31:46.679 real 0m7.960s 00:31:46.679 user 0m12.838s 00:31:46.679 sys 0m1.339s 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.679 16:01:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:31:46.679 16:01:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:31:46.679 16:01:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:46.679 16:01:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:46.679 ************************************ 00:31:46.679 START TEST 
raid5f_state_function_test_sb 00:31:46.679 ************************************ 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:31:46.679 
16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:46.679 Process raid pid: 80779 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80779 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80779' 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80779 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # 
'[' -z 80779 ']' 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:46.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:46.679 16:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.679 [2024-11-05 16:01:18.956073] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:31:46.679 [2024-11-05 16:01:18.956308] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.937 [2024-11-05 16:01:19.112465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.937 [2024-11-05 16:01:19.194524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.937 [2024-11-05 16:01:19.305286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:46.937 [2024-11-05 16:01:19.305314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.502 [2024-11-05 16:01:19.796866] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:47.502 [2024-11-05 16:01:19.796907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:47.502 [2024-11-05 16:01:19.796919] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:47.502 [2024-11-05 16:01:19.796927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:47.502 [2024-11-05 16:01:19.796931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:31:47.502 [2024-11-05 16:01:19.796938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:47.502 [2024-11-05 16:01:19.796943] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:47.502 [2024-11-05 16:01:19.796949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.502 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.502 "name": "Existed_Raid", 00:31:47.502 "uuid": "0c4f6367-4ae1-4753-975b-a6db4392c464", 00:31:47.502 "strip_size_kb": 64, 00:31:47.502 "state": "configuring", 00:31:47.502 "raid_level": "raid5f", 00:31:47.502 "superblock": true, 00:31:47.502 "num_base_bdevs": 4, 00:31:47.502 "num_base_bdevs_discovered": 0, 00:31:47.502 "num_base_bdevs_operational": 4, 00:31:47.502 "base_bdevs_list": [ 00:31:47.502 { 00:31:47.502 "name": "BaseBdev1", 00:31:47.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.503 "is_configured": false, 00:31:47.503 "data_offset": 0, 00:31:47.503 "data_size": 0 00:31:47.503 }, 00:31:47.503 { 00:31:47.503 "name": "BaseBdev2", 00:31:47.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.503 "is_configured": false, 00:31:47.503 "data_offset": 0, 00:31:47.503 "data_size": 0 00:31:47.503 }, 00:31:47.503 { 00:31:47.503 "name": "BaseBdev3", 00:31:47.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.503 "is_configured": false, 00:31:47.503 "data_offset": 0, 00:31:47.503 "data_size": 0 00:31:47.503 }, 00:31:47.503 { 00:31:47.503 "name": "BaseBdev4", 00:31:47.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.503 "is_configured": false, 00:31:47.503 "data_offset": 0, 00:31:47.503 "data_size": 0 00:31:47.503 } 00:31:47.503 ] 00:31:47.503 }' 00:31:47.503 16:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.503 16:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.760 [2024-11-05 16:01:20.124898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:47.760 [2024-11-05 16:01:20.125032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.760 [2024-11-05 16:01:20.132896] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:47.760 [2024-11-05 16:01:20.132929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:47.760 [2024-11-05 16:01:20.132936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:47.760 [2024-11-05 16:01:20.132943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:47.760 [2024-11-05 16:01:20.132949] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:47.760 [2024-11-05 16:01:20.132956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:47.760 [2024-11-05 16:01:20.132961] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:47.760 [2024-11-05 16:01:20.132967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.760 [2024-11-05 16:01:20.160482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:47.760 BaseBdev1 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.760 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.018 [ 00:31:48.018 { 00:31:48.018 "name": "BaseBdev1", 00:31:48.018 "aliases": [ 00:31:48.018 "44cf472e-21f2-4166-9d2f-82dbdc485fd7" 00:31:48.018 ], 00:31:48.018 "product_name": "Malloc disk", 00:31:48.018 "block_size": 512, 00:31:48.018 "num_blocks": 65536, 00:31:48.018 "uuid": "44cf472e-21f2-4166-9d2f-82dbdc485fd7", 00:31:48.018 "assigned_rate_limits": { 00:31:48.018 "rw_ios_per_sec": 0, 00:31:48.018 "rw_mbytes_per_sec": 0, 00:31:48.018 "r_mbytes_per_sec": 0, 00:31:48.018 "w_mbytes_per_sec": 0 00:31:48.018 }, 00:31:48.018 "claimed": true, 00:31:48.018 "claim_type": "exclusive_write", 00:31:48.018 "zoned": false, 00:31:48.018 "supported_io_types": { 00:31:48.018 "read": true, 00:31:48.018 "write": true, 00:31:48.018 "unmap": true, 00:31:48.018 "flush": true, 00:31:48.018 "reset": true, 00:31:48.018 "nvme_admin": false, 00:31:48.018 "nvme_io": false, 00:31:48.018 "nvme_io_md": false, 00:31:48.018 "write_zeroes": true, 00:31:48.018 "zcopy": true, 00:31:48.018 "get_zone_info": false, 00:31:48.018 "zone_management": false, 00:31:48.018 "zone_append": false, 00:31:48.018 "compare": false, 00:31:48.018 "compare_and_write": false, 00:31:48.018 "abort": true, 00:31:48.018 "seek_hole": false, 00:31:48.018 "seek_data": false, 00:31:48.018 "copy": true, 00:31:48.019 "nvme_iov_md": false 00:31:48.019 }, 00:31:48.019 "memory_domains": [ 00:31:48.019 { 00:31:48.019 "dma_device_id": "system", 00:31:48.019 "dma_device_type": 1 00:31:48.019 }, 00:31:48.019 { 00:31:48.019 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:31:48.019 "dma_device_type": 2 00:31:48.019 } 00:31:48.019 ], 00:31:48.019 "driver_specific": {} 00:31:48.019 } 00:31:48.019 ] 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.019 16:01:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.019 "name": "Existed_Raid", 00:31:48.019 "uuid": "1b708268-5ee6-41bb-85fc-5c38528a2ca6", 00:31:48.019 "strip_size_kb": 64, 00:31:48.019 "state": "configuring", 00:31:48.019 "raid_level": "raid5f", 00:31:48.019 "superblock": true, 00:31:48.019 "num_base_bdevs": 4, 00:31:48.019 "num_base_bdevs_discovered": 1, 00:31:48.019 "num_base_bdevs_operational": 4, 00:31:48.019 "base_bdevs_list": [ 00:31:48.019 { 00:31:48.019 "name": "BaseBdev1", 00:31:48.019 "uuid": "44cf472e-21f2-4166-9d2f-82dbdc485fd7", 00:31:48.019 "is_configured": true, 00:31:48.019 "data_offset": 2048, 00:31:48.019 "data_size": 63488 00:31:48.019 }, 00:31:48.019 { 00:31:48.019 "name": "BaseBdev2", 00:31:48.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.019 "is_configured": false, 00:31:48.019 "data_offset": 0, 00:31:48.019 "data_size": 0 00:31:48.019 }, 00:31:48.019 { 00:31:48.019 "name": "BaseBdev3", 00:31:48.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.019 "is_configured": false, 00:31:48.019 "data_offset": 0, 00:31:48.019 "data_size": 0 00:31:48.019 }, 00:31:48.019 { 00:31:48.019 "name": "BaseBdev4", 00:31:48.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.019 "is_configured": false, 00:31:48.019 "data_offset": 0, 00:31:48.019 "data_size": 0 00:31:48.019 } 00:31:48.019 ] 00:31:48.019 }' 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.019 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:48.277 16:01:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.277 [2024-11-05 16:01:20.496579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:48.277 [2024-11-05 16:01:20.496619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.277 [2024-11-05 16:01:20.504627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:48.277 [2024-11-05 16:01:20.506088] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:48.277 [2024-11-05 16:01:20.506120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:48.277 [2024-11-05 16:01:20.506127] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:48.277 [2024-11-05 16:01:20.506135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:48.277 [2024-11-05 16:01:20.506140] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:48.277 [2024-11-05 16:01:20.506147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.277 16:01:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.277 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.277 "name": "Existed_Raid", 00:31:48.277 "uuid": "62ff62de-dae1-47e8-8e6c-83b4d927b68c", 00:31:48.277 "strip_size_kb": 64, 00:31:48.277 "state": "configuring", 00:31:48.277 "raid_level": "raid5f", 00:31:48.277 "superblock": true, 00:31:48.277 "num_base_bdevs": 4, 00:31:48.277 "num_base_bdevs_discovered": 1, 00:31:48.277 "num_base_bdevs_operational": 4, 00:31:48.277 "base_bdevs_list": [ 00:31:48.277 { 00:31:48.277 "name": "BaseBdev1", 00:31:48.278 "uuid": "44cf472e-21f2-4166-9d2f-82dbdc485fd7", 00:31:48.278 "is_configured": true, 00:31:48.278 "data_offset": 2048, 00:31:48.278 "data_size": 63488 00:31:48.278 }, 00:31:48.278 { 00:31:48.278 "name": "BaseBdev2", 00:31:48.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.278 "is_configured": false, 00:31:48.278 "data_offset": 0, 00:31:48.278 "data_size": 0 00:31:48.278 }, 00:31:48.278 { 00:31:48.278 "name": "BaseBdev3", 00:31:48.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.278 "is_configured": false, 00:31:48.278 "data_offset": 0, 00:31:48.278 "data_size": 0 00:31:48.278 }, 00:31:48.278 { 00:31:48.278 "name": "BaseBdev4", 00:31:48.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.278 "is_configured": false, 00:31:48.278 "data_offset": 0, 00:31:48.278 "data_size": 0 00:31:48.278 } 00:31:48.278 ] 00:31:48.278 }' 00:31:48.278 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.278 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 [2024-11-05 16:01:20.850736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:48.537 BaseBdev2 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 [ 00:31:48.537 { 00:31:48.537 "name": "BaseBdev2", 00:31:48.537 "aliases": [ 00:31:48.537 
"11aa66ca-2913-4680-9418-4d249cb00bb0" 00:31:48.537 ], 00:31:48.537 "product_name": "Malloc disk", 00:31:48.537 "block_size": 512, 00:31:48.537 "num_blocks": 65536, 00:31:48.537 "uuid": "11aa66ca-2913-4680-9418-4d249cb00bb0", 00:31:48.537 "assigned_rate_limits": { 00:31:48.537 "rw_ios_per_sec": 0, 00:31:48.537 "rw_mbytes_per_sec": 0, 00:31:48.537 "r_mbytes_per_sec": 0, 00:31:48.537 "w_mbytes_per_sec": 0 00:31:48.537 }, 00:31:48.537 "claimed": true, 00:31:48.537 "claim_type": "exclusive_write", 00:31:48.537 "zoned": false, 00:31:48.537 "supported_io_types": { 00:31:48.537 "read": true, 00:31:48.537 "write": true, 00:31:48.537 "unmap": true, 00:31:48.537 "flush": true, 00:31:48.537 "reset": true, 00:31:48.537 "nvme_admin": false, 00:31:48.537 "nvme_io": false, 00:31:48.537 "nvme_io_md": false, 00:31:48.537 "write_zeroes": true, 00:31:48.537 "zcopy": true, 00:31:48.537 "get_zone_info": false, 00:31:48.537 "zone_management": false, 00:31:48.537 "zone_append": false, 00:31:48.537 "compare": false, 00:31:48.537 "compare_and_write": false, 00:31:48.537 "abort": true, 00:31:48.537 "seek_hole": false, 00:31:48.537 "seek_data": false, 00:31:48.537 "copy": true, 00:31:48.537 "nvme_iov_md": false 00:31:48.537 }, 00:31:48.537 "memory_domains": [ 00:31:48.537 { 00:31:48.537 "dma_device_id": "system", 00:31:48.537 "dma_device_type": 1 00:31:48.537 }, 00:31:48.537 { 00:31:48.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:48.537 "dma_device_type": 2 00:31:48.537 } 00:31:48.537 ], 00:31:48.537 "driver_specific": {} 00:31:48.537 } 00:31:48.537 ] 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.537 "name": "Existed_Raid", 00:31:48.537 "uuid": 
"62ff62de-dae1-47e8-8e6c-83b4d927b68c", 00:31:48.537 "strip_size_kb": 64, 00:31:48.537 "state": "configuring", 00:31:48.537 "raid_level": "raid5f", 00:31:48.537 "superblock": true, 00:31:48.537 "num_base_bdevs": 4, 00:31:48.537 "num_base_bdevs_discovered": 2, 00:31:48.537 "num_base_bdevs_operational": 4, 00:31:48.537 "base_bdevs_list": [ 00:31:48.537 { 00:31:48.537 "name": "BaseBdev1", 00:31:48.537 "uuid": "44cf472e-21f2-4166-9d2f-82dbdc485fd7", 00:31:48.537 "is_configured": true, 00:31:48.537 "data_offset": 2048, 00:31:48.537 "data_size": 63488 00:31:48.537 }, 00:31:48.537 { 00:31:48.537 "name": "BaseBdev2", 00:31:48.537 "uuid": "11aa66ca-2913-4680-9418-4d249cb00bb0", 00:31:48.537 "is_configured": true, 00:31:48.537 "data_offset": 2048, 00:31:48.537 "data_size": 63488 00:31:48.537 }, 00:31:48.537 { 00:31:48.537 "name": "BaseBdev3", 00:31:48.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.537 "is_configured": false, 00:31:48.537 "data_offset": 0, 00:31:48.537 "data_size": 0 00:31:48.537 }, 00:31:48.537 { 00:31:48.537 "name": "BaseBdev4", 00:31:48.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.537 "is_configured": false, 00:31:48.537 "data_offset": 0, 00:31:48.537 "data_size": 0 00:31:48.537 } 00:31:48.537 ] 00:31:48.537 }' 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.537 16:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.795 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:48.795 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.795 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.056 [2024-11-05 16:01:21.229597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:49.056 BaseBdev3 
00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.056 [ 00:31:49.056 { 00:31:49.056 "name": "BaseBdev3", 00:31:49.056 "aliases": [ 00:31:49.056 "557766aa-2199-4dbe-8c21-495a54a10553" 00:31:49.056 ], 00:31:49.056 "product_name": "Malloc disk", 00:31:49.056 "block_size": 512, 00:31:49.056 "num_blocks": 65536, 00:31:49.056 "uuid": "557766aa-2199-4dbe-8c21-495a54a10553", 00:31:49.056 
"assigned_rate_limits": { 00:31:49.056 "rw_ios_per_sec": 0, 00:31:49.056 "rw_mbytes_per_sec": 0, 00:31:49.056 "r_mbytes_per_sec": 0, 00:31:49.056 "w_mbytes_per_sec": 0 00:31:49.056 }, 00:31:49.056 "claimed": true, 00:31:49.056 "claim_type": "exclusive_write", 00:31:49.056 "zoned": false, 00:31:49.056 "supported_io_types": { 00:31:49.056 "read": true, 00:31:49.056 "write": true, 00:31:49.056 "unmap": true, 00:31:49.056 "flush": true, 00:31:49.056 "reset": true, 00:31:49.056 "nvme_admin": false, 00:31:49.056 "nvme_io": false, 00:31:49.056 "nvme_io_md": false, 00:31:49.056 "write_zeroes": true, 00:31:49.056 "zcopy": true, 00:31:49.056 "get_zone_info": false, 00:31:49.056 "zone_management": false, 00:31:49.056 "zone_append": false, 00:31:49.056 "compare": false, 00:31:49.056 "compare_and_write": false, 00:31:49.056 "abort": true, 00:31:49.056 "seek_hole": false, 00:31:49.056 "seek_data": false, 00:31:49.056 "copy": true, 00:31:49.056 "nvme_iov_md": false 00:31:49.056 }, 00:31:49.056 "memory_domains": [ 00:31:49.056 { 00:31:49.056 "dma_device_id": "system", 00:31:49.056 "dma_device_type": 1 00:31:49.056 }, 00:31:49.056 { 00:31:49.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:49.056 "dma_device_type": 2 00:31:49.056 } 00:31:49.056 ], 00:31:49.056 "driver_specific": {} 00:31:49.056 } 00:31:49.056 ] 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:49.056 "name": "Existed_Raid", 00:31:49.056 "uuid": "62ff62de-dae1-47e8-8e6c-83b4d927b68c", 00:31:49.056 "strip_size_kb": 64, 00:31:49.056 "state": "configuring", 00:31:49.056 "raid_level": "raid5f", 00:31:49.056 "superblock": true, 00:31:49.056 "num_base_bdevs": 4, 00:31:49.056 "num_base_bdevs_discovered": 3, 
00:31:49.056 "num_base_bdevs_operational": 4, 00:31:49.056 "base_bdevs_list": [ 00:31:49.056 { 00:31:49.056 "name": "BaseBdev1", 00:31:49.056 "uuid": "44cf472e-21f2-4166-9d2f-82dbdc485fd7", 00:31:49.056 "is_configured": true, 00:31:49.056 "data_offset": 2048, 00:31:49.056 "data_size": 63488 00:31:49.056 }, 00:31:49.056 { 00:31:49.056 "name": "BaseBdev2", 00:31:49.056 "uuid": "11aa66ca-2913-4680-9418-4d249cb00bb0", 00:31:49.056 "is_configured": true, 00:31:49.056 "data_offset": 2048, 00:31:49.056 "data_size": 63488 00:31:49.056 }, 00:31:49.056 { 00:31:49.056 "name": "BaseBdev3", 00:31:49.056 "uuid": "557766aa-2199-4dbe-8c21-495a54a10553", 00:31:49.056 "is_configured": true, 00:31:49.056 "data_offset": 2048, 00:31:49.056 "data_size": 63488 00:31:49.056 }, 00:31:49.056 { 00:31:49.056 "name": "BaseBdev4", 00:31:49.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.056 "is_configured": false, 00:31:49.056 "data_offset": 0, 00:31:49.056 "data_size": 0 00:31:49.056 } 00:31:49.056 ] 00:31:49.056 }' 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:49.056 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.319 [2024-11-05 16:01:21.599497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:49.319 [2024-11-05 16:01:21.599679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:49.319 [2024-11-05 16:01:21.599690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:49.319 [2024-11-05 
16:01:21.599912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:49.319 BaseBdev4 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.319 [2024-11-05 16:01:21.603700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:49.319 [2024-11-05 16:01:21.603718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:49.319 [2024-11-05 16:01:21.603908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:49.319 16:01:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.319 [ 00:31:49.319 { 00:31:49.319 "name": "BaseBdev4", 00:31:49.319 "aliases": [ 00:31:49.319 "3bb1dea4-488e-4617-8c48-50c8bdbbd2a9" 00:31:49.319 ], 00:31:49.319 "product_name": "Malloc disk", 00:31:49.319 "block_size": 512, 00:31:49.319 "num_blocks": 65536, 00:31:49.319 "uuid": "3bb1dea4-488e-4617-8c48-50c8bdbbd2a9", 00:31:49.319 "assigned_rate_limits": { 00:31:49.319 "rw_ios_per_sec": 0, 00:31:49.319 "rw_mbytes_per_sec": 0, 00:31:49.319 "r_mbytes_per_sec": 0, 00:31:49.319 "w_mbytes_per_sec": 0 00:31:49.319 }, 00:31:49.319 "claimed": true, 00:31:49.319 "claim_type": "exclusive_write", 00:31:49.319 "zoned": false, 00:31:49.319 "supported_io_types": { 00:31:49.319 "read": true, 00:31:49.319 "write": true, 00:31:49.319 "unmap": true, 00:31:49.319 "flush": true, 00:31:49.319 "reset": true, 00:31:49.319 "nvme_admin": false, 00:31:49.319 "nvme_io": false, 00:31:49.319 "nvme_io_md": false, 00:31:49.319 "write_zeroes": true, 00:31:49.319 "zcopy": true, 00:31:49.319 "get_zone_info": false, 00:31:49.319 "zone_management": false, 00:31:49.319 "zone_append": false, 00:31:49.319 "compare": false, 00:31:49.319 "compare_and_write": false, 00:31:49.319 "abort": true, 00:31:49.319 "seek_hole": false, 00:31:49.319 "seek_data": false, 00:31:49.319 "copy": true, 00:31:49.319 "nvme_iov_md": false 00:31:49.319 }, 00:31:49.319 "memory_domains": [ 00:31:49.319 { 00:31:49.319 "dma_device_id": "system", 00:31:49.319 "dma_device_type": 1 00:31:49.319 }, 00:31:49.319 { 00:31:49.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:49.319 "dma_device_type": 2 00:31:49.319 } 00:31:49.319 ], 00:31:49.319 "driver_specific": {} 00:31:49.319 } 00:31:49.319 ] 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.319 16:01:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.319 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:49.319 "name": "Existed_Raid", 00:31:49.319 "uuid": "62ff62de-dae1-47e8-8e6c-83b4d927b68c", 00:31:49.319 "strip_size_kb": 64, 00:31:49.319 "state": "online", 00:31:49.319 "raid_level": "raid5f", 00:31:49.319 "superblock": true, 00:31:49.319 "num_base_bdevs": 4, 00:31:49.319 "num_base_bdevs_discovered": 4, 00:31:49.319 "num_base_bdevs_operational": 4, 00:31:49.320 "base_bdevs_list": [ 00:31:49.320 { 00:31:49.320 "name": "BaseBdev1", 00:31:49.320 "uuid": "44cf472e-21f2-4166-9d2f-82dbdc485fd7", 00:31:49.320 "is_configured": true, 00:31:49.320 "data_offset": 2048, 00:31:49.320 "data_size": 63488 00:31:49.320 }, 00:31:49.320 { 00:31:49.320 "name": "BaseBdev2", 00:31:49.320 "uuid": "11aa66ca-2913-4680-9418-4d249cb00bb0", 00:31:49.320 "is_configured": true, 00:31:49.320 "data_offset": 2048, 00:31:49.320 "data_size": 63488 00:31:49.320 }, 00:31:49.320 { 00:31:49.320 "name": "BaseBdev3", 00:31:49.320 "uuid": "557766aa-2199-4dbe-8c21-495a54a10553", 00:31:49.320 "is_configured": true, 00:31:49.320 "data_offset": 2048, 00:31:49.320 "data_size": 63488 00:31:49.320 }, 00:31:49.320 { 00:31:49.320 "name": "BaseBdev4", 00:31:49.320 "uuid": "3bb1dea4-488e-4617-8c48-50c8bdbbd2a9", 00:31:49.320 "is_configured": true, 00:31:49.320 "data_offset": 2048, 00:31:49.320 "data_size": 63488 00:31:49.320 } 00:31:49.320 ] 00:31:49.320 }' 00:31:49.320 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:49.320 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:49.579 [2024-11-05 16:01:21.968243] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:49.579 "name": "Existed_Raid", 00:31:49.579 "aliases": [ 00:31:49.579 "62ff62de-dae1-47e8-8e6c-83b4d927b68c" 00:31:49.579 ], 00:31:49.579 "product_name": "Raid Volume", 00:31:49.579 "block_size": 512, 00:31:49.579 "num_blocks": 190464, 00:31:49.579 "uuid": "62ff62de-dae1-47e8-8e6c-83b4d927b68c", 00:31:49.579 "assigned_rate_limits": { 00:31:49.579 "rw_ios_per_sec": 0, 00:31:49.579 "rw_mbytes_per_sec": 0, 00:31:49.579 "r_mbytes_per_sec": 0, 00:31:49.579 "w_mbytes_per_sec": 0 00:31:49.579 }, 00:31:49.579 "claimed": false, 00:31:49.579 "zoned": false, 00:31:49.579 "supported_io_types": { 00:31:49.579 "read": true, 00:31:49.579 "write": true, 00:31:49.579 "unmap": false, 00:31:49.579 "flush": false, 
00:31:49.579 "reset": true, 00:31:49.579 "nvme_admin": false, 00:31:49.579 "nvme_io": false, 00:31:49.579 "nvme_io_md": false, 00:31:49.579 "write_zeroes": true, 00:31:49.579 "zcopy": false, 00:31:49.579 "get_zone_info": false, 00:31:49.579 "zone_management": false, 00:31:49.579 "zone_append": false, 00:31:49.579 "compare": false, 00:31:49.579 "compare_and_write": false, 00:31:49.579 "abort": false, 00:31:49.579 "seek_hole": false, 00:31:49.579 "seek_data": false, 00:31:49.579 "copy": false, 00:31:49.579 "nvme_iov_md": false 00:31:49.579 }, 00:31:49.579 "driver_specific": { 00:31:49.579 "raid": { 00:31:49.579 "uuid": "62ff62de-dae1-47e8-8e6c-83b4d927b68c", 00:31:49.579 "strip_size_kb": 64, 00:31:49.579 "state": "online", 00:31:49.579 "raid_level": "raid5f", 00:31:49.579 "superblock": true, 00:31:49.579 "num_base_bdevs": 4, 00:31:49.579 "num_base_bdevs_discovered": 4, 00:31:49.579 "num_base_bdevs_operational": 4, 00:31:49.579 "base_bdevs_list": [ 00:31:49.579 { 00:31:49.579 "name": "BaseBdev1", 00:31:49.579 "uuid": "44cf472e-21f2-4166-9d2f-82dbdc485fd7", 00:31:49.579 "is_configured": true, 00:31:49.579 "data_offset": 2048, 00:31:49.579 "data_size": 63488 00:31:49.579 }, 00:31:49.579 { 00:31:49.579 "name": "BaseBdev2", 00:31:49.579 "uuid": "11aa66ca-2913-4680-9418-4d249cb00bb0", 00:31:49.579 "is_configured": true, 00:31:49.579 "data_offset": 2048, 00:31:49.579 "data_size": 63488 00:31:49.579 }, 00:31:49.579 { 00:31:49.579 "name": "BaseBdev3", 00:31:49.579 "uuid": "557766aa-2199-4dbe-8c21-495a54a10553", 00:31:49.579 "is_configured": true, 00:31:49.579 "data_offset": 2048, 00:31:49.579 "data_size": 63488 00:31:49.579 }, 00:31:49.579 { 00:31:49.579 "name": "BaseBdev4", 00:31:49.579 "uuid": "3bb1dea4-488e-4617-8c48-50c8bdbbd2a9", 00:31:49.579 "is_configured": true, 00:31:49.579 "data_offset": 2048, 00:31:49.579 "data_size": 63488 00:31:49.579 } 00:31:49.579 ] 00:31:49.579 } 00:31:49.579 } 00:31:49.579 }' 00:31:49.579 16:01:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:49.837 BaseBdev2 00:31:49.837 BaseBdev3 00:31:49.837 BaseBdev4' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.837 [2024-11-05 16:01:22.200137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:31:49.837 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:49.838 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:49.838 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:49.838 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:31:49.838 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:49.838 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:49.838 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:49.838 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.096 "name": "Existed_Raid", 00:31:50.096 "uuid": "62ff62de-dae1-47e8-8e6c-83b4d927b68c", 00:31:50.096 "strip_size_kb": 64, 00:31:50.096 "state": "online", 00:31:50.096 "raid_level": "raid5f", 00:31:50.096 "superblock": true, 00:31:50.096 "num_base_bdevs": 4, 00:31:50.096 "num_base_bdevs_discovered": 3, 00:31:50.096 "num_base_bdevs_operational": 3, 00:31:50.096 "base_bdevs_list": [ 00:31:50.096 { 00:31:50.096 "name": null, 00:31:50.096 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:50.096 "is_configured": false, 00:31:50.096 "data_offset": 0, 00:31:50.096 "data_size": 63488 00:31:50.096 }, 00:31:50.096 { 00:31:50.096 "name": "BaseBdev2", 00:31:50.096 "uuid": "11aa66ca-2913-4680-9418-4d249cb00bb0", 00:31:50.096 "is_configured": true, 00:31:50.096 "data_offset": 2048, 00:31:50.096 "data_size": 63488 00:31:50.096 }, 00:31:50.096 { 00:31:50.096 "name": "BaseBdev3", 00:31:50.096 "uuid": "557766aa-2199-4dbe-8c21-495a54a10553", 00:31:50.096 "is_configured": true, 00:31:50.096 "data_offset": 2048, 00:31:50.096 "data_size": 63488 00:31:50.096 }, 00:31:50.096 { 00:31:50.096 "name": "BaseBdev4", 00:31:50.096 "uuid": "3bb1dea4-488e-4617-8c48-50c8bdbbd2a9", 00:31:50.096 "is_configured": true, 00:31:50.096 "data_offset": 2048, 00:31:50.096 "data_size": 63488 00:31:50.096 } 00:31:50.096 ] 00:31:50.096 }' 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.096 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.354 [2024-11-05 16:01:22.597918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:50.354 [2024-11-05 16:01:22.598124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:50.354 [2024-11-05 16:01:22.642448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:50.354 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:50.355 
16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.355 [2024-11-05 16:01:22.682472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.355 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.355 [2024-11-05 16:01:22.766940] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:31:50.355 [2024-11-05 16:01:22.766975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:50.615 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.616 BaseBdev2 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 [ 00:31:50.616 { 00:31:50.616 "name": "BaseBdev2", 00:31:50.616 "aliases": [ 00:31:50.616 "de4517c7-1089-4e5e-9cc4-f08b312952c6" 00:31:50.616 ], 00:31:50.616 "product_name": "Malloc disk", 00:31:50.616 "block_size": 512, 00:31:50.616 "num_blocks": 65536, 00:31:50.616 "uuid": 
"de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:50.616 "assigned_rate_limits": { 00:31:50.616 "rw_ios_per_sec": 0, 00:31:50.616 "rw_mbytes_per_sec": 0, 00:31:50.616 "r_mbytes_per_sec": 0, 00:31:50.616 "w_mbytes_per_sec": 0 00:31:50.616 }, 00:31:50.616 "claimed": false, 00:31:50.616 "zoned": false, 00:31:50.616 "supported_io_types": { 00:31:50.616 "read": true, 00:31:50.616 "write": true, 00:31:50.616 "unmap": true, 00:31:50.616 "flush": true, 00:31:50.616 "reset": true, 00:31:50.616 "nvme_admin": false, 00:31:50.616 "nvme_io": false, 00:31:50.616 "nvme_io_md": false, 00:31:50.616 "write_zeroes": true, 00:31:50.616 "zcopy": true, 00:31:50.616 "get_zone_info": false, 00:31:50.616 "zone_management": false, 00:31:50.616 "zone_append": false, 00:31:50.616 "compare": false, 00:31:50.616 "compare_and_write": false, 00:31:50.616 "abort": true, 00:31:50.616 "seek_hole": false, 00:31:50.616 "seek_data": false, 00:31:50.616 "copy": true, 00:31:50.616 "nvme_iov_md": false 00:31:50.616 }, 00:31:50.616 "memory_domains": [ 00:31:50.616 { 00:31:50.616 "dma_device_id": "system", 00:31:50.616 "dma_device_type": 1 00:31:50.616 }, 00:31:50.616 { 00:31:50.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.616 "dma_device_type": 2 00:31:50.616 } 00:31:50.616 ], 00:31:50.616 "driver_specific": {} 00:31:50.616 } 00:31:50.616 ] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 BaseBdev3 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 [ 00:31:50.616 { 00:31:50.616 "name": "BaseBdev3", 00:31:50.616 "aliases": [ 00:31:50.616 "02c7d821-55f0-44ae-9f5e-a0c44b1d7381" 00:31:50.616 ], 00:31:50.616 
"product_name": "Malloc disk", 00:31:50.616 "block_size": 512, 00:31:50.616 "num_blocks": 65536, 00:31:50.616 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:50.616 "assigned_rate_limits": { 00:31:50.616 "rw_ios_per_sec": 0, 00:31:50.616 "rw_mbytes_per_sec": 0, 00:31:50.616 "r_mbytes_per_sec": 0, 00:31:50.616 "w_mbytes_per_sec": 0 00:31:50.616 }, 00:31:50.616 "claimed": false, 00:31:50.616 "zoned": false, 00:31:50.616 "supported_io_types": { 00:31:50.616 "read": true, 00:31:50.616 "write": true, 00:31:50.616 "unmap": true, 00:31:50.616 "flush": true, 00:31:50.616 "reset": true, 00:31:50.616 "nvme_admin": false, 00:31:50.616 "nvme_io": false, 00:31:50.616 "nvme_io_md": false, 00:31:50.616 "write_zeroes": true, 00:31:50.616 "zcopy": true, 00:31:50.616 "get_zone_info": false, 00:31:50.616 "zone_management": false, 00:31:50.616 "zone_append": false, 00:31:50.616 "compare": false, 00:31:50.616 "compare_and_write": false, 00:31:50.616 "abort": true, 00:31:50.616 "seek_hole": false, 00:31:50.616 "seek_data": false, 00:31:50.616 "copy": true, 00:31:50.616 "nvme_iov_md": false 00:31:50.616 }, 00:31:50.616 "memory_domains": [ 00:31:50.616 { 00:31:50.616 "dma_device_id": "system", 00:31:50.616 "dma_device_type": 1 00:31:50.616 }, 00:31:50.616 { 00:31:50.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.616 "dma_device_type": 2 00:31:50.616 } 00:31:50.616 ], 00:31:50.616 "driver_specific": {} 00:31:50.616 } 00:31:50.616 ] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 BaseBdev4 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.616 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.616 [ 00:31:50.616 { 00:31:50.616 "name": "BaseBdev4", 00:31:50.616 
"aliases": [ 00:31:50.616 "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc" 00:31:50.616 ], 00:31:50.616 "product_name": "Malloc disk", 00:31:50.616 "block_size": 512, 00:31:50.616 "num_blocks": 65536, 00:31:50.616 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:50.616 "assigned_rate_limits": { 00:31:50.616 "rw_ios_per_sec": 0, 00:31:50.616 "rw_mbytes_per_sec": 0, 00:31:50.616 "r_mbytes_per_sec": 0, 00:31:50.616 "w_mbytes_per_sec": 0 00:31:50.616 }, 00:31:50.616 "claimed": false, 00:31:50.616 "zoned": false, 00:31:50.616 "supported_io_types": { 00:31:50.616 "read": true, 00:31:50.616 "write": true, 00:31:50.616 "unmap": true, 00:31:50.616 "flush": true, 00:31:50.616 "reset": true, 00:31:50.617 "nvme_admin": false, 00:31:50.617 "nvme_io": false, 00:31:50.617 "nvme_io_md": false, 00:31:50.617 "write_zeroes": true, 00:31:50.617 "zcopy": true, 00:31:50.617 "get_zone_info": false, 00:31:50.617 "zone_management": false, 00:31:50.617 "zone_append": false, 00:31:50.617 "compare": false, 00:31:50.617 "compare_and_write": false, 00:31:50.617 "abort": true, 00:31:50.617 "seek_hole": false, 00:31:50.617 "seek_data": false, 00:31:50.617 "copy": true, 00:31:50.617 "nvme_iov_md": false 00:31:50.617 }, 00:31:50.617 "memory_domains": [ 00:31:50.617 { 00:31:50.617 "dma_device_id": "system", 00:31:50.617 "dma_device_type": 1 00:31:50.617 }, 00:31:50.617 { 00:31:50.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.617 "dma_device_type": 2 00:31:50.617 } 00:31:50.617 ], 00:31:50.617 "driver_specific": {} 00:31:50.617 } 00:31:50.617 ] 00:31:50.617 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.617 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:50.617 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:50.617 16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:50.617 
16:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:50.617 16:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.617 [2024-11-05 16:01:23.006205] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:50.617 [2024-11-05 16:01:23.006318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:50.617 [2024-11-05 16:01:23.006374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:50.617 [2024-11-05 16:01:23.007882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:50.617 [2024-11-05 16:01:23.007985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.617 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.876 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.876 "name": "Existed_Raid", 00:31:50.876 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:50.876 "strip_size_kb": 64, 00:31:50.876 "state": "configuring", 00:31:50.876 "raid_level": "raid5f", 00:31:50.876 "superblock": true, 00:31:50.876 "num_base_bdevs": 4, 00:31:50.876 "num_base_bdevs_discovered": 3, 00:31:50.876 "num_base_bdevs_operational": 4, 00:31:50.876 "base_bdevs_list": [ 00:31:50.876 { 00:31:50.876 "name": "BaseBdev1", 00:31:50.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.876 "is_configured": false, 00:31:50.876 "data_offset": 0, 00:31:50.876 "data_size": 0 00:31:50.876 }, 00:31:50.876 { 00:31:50.876 "name": "BaseBdev2", 00:31:50.876 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:50.876 "is_configured": true, 00:31:50.876 "data_offset": 2048, 00:31:50.876 "data_size": 63488 00:31:50.876 }, 00:31:50.876 { 00:31:50.876 "name": "BaseBdev3", 
00:31:50.876 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:50.876 "is_configured": true, 00:31:50.876 "data_offset": 2048, 00:31:50.876 "data_size": 63488 00:31:50.876 }, 00:31:50.876 { 00:31:50.876 "name": "BaseBdev4", 00:31:50.876 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:50.876 "is_configured": true, 00:31:50.876 "data_offset": 2048, 00:31:50.876 "data_size": 63488 00:31:50.876 } 00:31:50.876 ] 00:31:50.876 }' 00:31:50.876 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.876 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 [2024-11-05 16:01:23.334263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:51.134 
16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.134 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.135 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.135 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.135 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.135 "name": "Existed_Raid", 00:31:51.135 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:51.135 "strip_size_kb": 64, 00:31:51.135 "state": "configuring", 00:31:51.135 "raid_level": "raid5f", 00:31:51.135 "superblock": true, 00:31:51.135 "num_base_bdevs": 4, 00:31:51.135 "num_base_bdevs_discovered": 2, 00:31:51.135 "num_base_bdevs_operational": 4, 00:31:51.135 "base_bdevs_list": [ 00:31:51.135 { 00:31:51.135 "name": "BaseBdev1", 00:31:51.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.135 "is_configured": false, 00:31:51.135 "data_offset": 0, 00:31:51.135 "data_size": 0 00:31:51.135 }, 00:31:51.135 { 00:31:51.135 "name": null, 00:31:51.135 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:51.135 "is_configured": false, 00:31:51.135 "data_offset": 0, 00:31:51.135 "data_size": 63488 00:31:51.135 }, 00:31:51.135 { 
00:31:51.135 "name": "BaseBdev3", 00:31:51.135 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:51.135 "is_configured": true, 00:31:51.135 "data_offset": 2048, 00:31:51.135 "data_size": 63488 00:31:51.135 }, 00:31:51.135 { 00:31:51.135 "name": "BaseBdev4", 00:31:51.135 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:51.135 "is_configured": true, 00:31:51.135 "data_offset": 2048, 00:31:51.135 "data_size": 63488 00:31:51.135 } 00:31:51.135 ] 00:31:51.135 }' 00:31:51.135 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.135 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 [2024-11-05 16:01:23.724303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:51.396 BaseBdev1 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 [ 00:31:51.396 { 00:31:51.396 "name": "BaseBdev1", 00:31:51.396 "aliases": [ 00:31:51.396 "a44d3a00-08a5-4977-b85a-563af315a2fb" 00:31:51.396 ], 00:31:51.396 "product_name": "Malloc disk", 00:31:51.396 "block_size": 512, 00:31:51.396 "num_blocks": 65536, 00:31:51.396 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:51.396 "assigned_rate_limits": { 00:31:51.396 "rw_ios_per_sec": 0, 00:31:51.396 "rw_mbytes_per_sec": 0, 00:31:51.396 
"r_mbytes_per_sec": 0, 00:31:51.396 "w_mbytes_per_sec": 0 00:31:51.396 }, 00:31:51.396 "claimed": true, 00:31:51.396 "claim_type": "exclusive_write", 00:31:51.396 "zoned": false, 00:31:51.396 "supported_io_types": { 00:31:51.396 "read": true, 00:31:51.396 "write": true, 00:31:51.396 "unmap": true, 00:31:51.396 "flush": true, 00:31:51.396 "reset": true, 00:31:51.396 "nvme_admin": false, 00:31:51.396 "nvme_io": false, 00:31:51.396 "nvme_io_md": false, 00:31:51.396 "write_zeroes": true, 00:31:51.396 "zcopy": true, 00:31:51.396 "get_zone_info": false, 00:31:51.396 "zone_management": false, 00:31:51.396 "zone_append": false, 00:31:51.396 "compare": false, 00:31:51.396 "compare_and_write": false, 00:31:51.396 "abort": true, 00:31:51.396 "seek_hole": false, 00:31:51.396 "seek_data": false, 00:31:51.396 "copy": true, 00:31:51.396 "nvme_iov_md": false 00:31:51.396 }, 00:31:51.396 "memory_domains": [ 00:31:51.396 { 00:31:51.396 "dma_device_id": "system", 00:31:51.396 "dma_device_type": 1 00:31:51.396 }, 00:31:51.396 { 00:31:51.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.396 "dma_device_type": 2 00:31:51.396 } 00:31:51.396 ], 00:31:51.396 "driver_specific": {} 00:31:51.396 } 00:31:51.396 ] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:51.396 16:01:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.396 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.396 "name": "Existed_Raid", 00:31:51.396 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:51.396 "strip_size_kb": 64, 00:31:51.396 "state": "configuring", 00:31:51.396 "raid_level": "raid5f", 00:31:51.396 "superblock": true, 00:31:51.396 "num_base_bdevs": 4, 00:31:51.396 "num_base_bdevs_discovered": 3, 00:31:51.397 "num_base_bdevs_operational": 4, 00:31:51.397 "base_bdevs_list": [ 00:31:51.397 { 00:31:51.397 "name": "BaseBdev1", 00:31:51.397 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:51.397 "is_configured": true, 00:31:51.397 "data_offset": 2048, 00:31:51.397 "data_size": 63488 00:31:51.397 
}, 00:31:51.397 { 00:31:51.397 "name": null, 00:31:51.397 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:51.397 "is_configured": false, 00:31:51.397 "data_offset": 0, 00:31:51.397 "data_size": 63488 00:31:51.397 }, 00:31:51.397 { 00:31:51.397 "name": "BaseBdev3", 00:31:51.397 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:51.397 "is_configured": true, 00:31:51.397 "data_offset": 2048, 00:31:51.397 "data_size": 63488 00:31:51.397 }, 00:31:51.397 { 00:31:51.397 "name": "BaseBdev4", 00:31:51.397 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:51.397 "is_configured": true, 00:31:51.397 "data_offset": 2048, 00:31:51.397 "data_size": 63488 00:31:51.397 } 00:31:51.397 ] 00:31:51.397 }' 00:31:51.397 16:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.397 16:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.658 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.658 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.919 
[2024-11-05 16:01:24.112434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.919 "name": "Existed_Raid", 00:31:51.919 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:51.919 "strip_size_kb": 64, 00:31:51.919 "state": "configuring", 00:31:51.919 "raid_level": "raid5f", 00:31:51.919 "superblock": true, 00:31:51.919 "num_base_bdevs": 4, 00:31:51.919 "num_base_bdevs_discovered": 2, 00:31:51.919 "num_base_bdevs_operational": 4, 00:31:51.919 "base_bdevs_list": [ 00:31:51.919 { 00:31:51.919 "name": "BaseBdev1", 00:31:51.919 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:51.919 "is_configured": true, 00:31:51.919 "data_offset": 2048, 00:31:51.919 "data_size": 63488 00:31:51.919 }, 00:31:51.919 { 00:31:51.919 "name": null, 00:31:51.919 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:51.919 "is_configured": false, 00:31:51.919 "data_offset": 0, 00:31:51.919 "data_size": 63488 00:31:51.919 }, 00:31:51.919 { 00:31:51.919 "name": null, 00:31:51.919 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:51.919 "is_configured": false, 00:31:51.919 "data_offset": 0, 00:31:51.919 "data_size": 63488 00:31:51.919 }, 00:31:51.919 { 00:31:51.919 "name": "BaseBdev4", 00:31:51.919 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:51.919 "is_configured": true, 00:31:51.919 "data_offset": 2048, 00:31:51.919 "data_size": 63488 00:31:51.919 } 00:31:51.919 ] 00:31:51.919 }' 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.919 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.182 [2024-11-05 16:01:24.456483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:52.182 16:01:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:52.182 "name": "Existed_Raid", 00:31:52.182 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:52.182 "strip_size_kb": 64, 00:31:52.182 "state": "configuring", 00:31:52.182 "raid_level": "raid5f", 00:31:52.182 "superblock": true, 00:31:52.182 "num_base_bdevs": 4, 00:31:52.182 "num_base_bdevs_discovered": 3, 00:31:52.182 "num_base_bdevs_operational": 4, 00:31:52.182 "base_bdevs_list": [ 00:31:52.182 { 00:31:52.182 "name": "BaseBdev1", 00:31:52.182 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:52.182 "is_configured": true, 00:31:52.182 "data_offset": 2048, 00:31:52.182 "data_size": 63488 00:31:52.182 }, 00:31:52.182 { 00:31:52.182 "name": null, 00:31:52.182 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:52.182 "is_configured": false, 00:31:52.182 "data_offset": 0, 00:31:52.182 "data_size": 63488 00:31:52.182 }, 00:31:52.182 { 00:31:52.182 "name": "BaseBdev3", 00:31:52.182 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:52.182 "is_configured": true, 00:31:52.182 "data_offset": 2048, 00:31:52.182 "data_size": 63488 00:31:52.182 }, 00:31:52.182 { 
00:31:52.182 "name": "BaseBdev4", 00:31:52.182 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:52.182 "is_configured": true, 00:31:52.182 "data_offset": 2048, 00:31:52.182 "data_size": 63488 00:31:52.182 } 00:31:52.182 ] 00:31:52.182 }' 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:52.182 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.444 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.444 [2024-11-05 16:01:24.828573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.703 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:52.703 "name": "Existed_Raid", 00:31:52.703 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:52.703 "strip_size_kb": 64, 00:31:52.703 "state": "configuring", 00:31:52.703 "raid_level": "raid5f", 00:31:52.703 "superblock": true, 00:31:52.703 "num_base_bdevs": 4, 00:31:52.703 "num_base_bdevs_discovered": 2, 00:31:52.703 
"num_base_bdevs_operational": 4, 00:31:52.703 "base_bdevs_list": [ 00:31:52.703 { 00:31:52.703 "name": null, 00:31:52.703 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:52.703 "is_configured": false, 00:31:52.703 "data_offset": 0, 00:31:52.703 "data_size": 63488 00:31:52.703 }, 00:31:52.703 { 00:31:52.703 "name": null, 00:31:52.703 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:52.703 "is_configured": false, 00:31:52.703 "data_offset": 0, 00:31:52.703 "data_size": 63488 00:31:52.703 }, 00:31:52.703 { 00:31:52.703 "name": "BaseBdev3", 00:31:52.703 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:52.703 "is_configured": true, 00:31:52.703 "data_offset": 2048, 00:31:52.703 "data_size": 63488 00:31:52.703 }, 00:31:52.703 { 00:31:52.703 "name": "BaseBdev4", 00:31:52.703 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:52.703 "is_configured": true, 00:31:52.703 "data_offset": 2048, 00:31:52.703 "data_size": 63488 00:31:52.703 } 00:31:52.703 ] 00:31:52.704 }' 00:31:52.704 16:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:52.704 16:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.964 [2024-11-05 16:01:25.215212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:52.964 "name": "Existed_Raid", 00:31:52.964 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:52.964 "strip_size_kb": 64, 00:31:52.964 "state": "configuring", 00:31:52.964 "raid_level": "raid5f", 00:31:52.964 "superblock": true, 00:31:52.964 "num_base_bdevs": 4, 00:31:52.964 "num_base_bdevs_discovered": 3, 00:31:52.964 "num_base_bdevs_operational": 4, 00:31:52.964 "base_bdevs_list": [ 00:31:52.964 { 00:31:52.964 "name": null, 00:31:52.964 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:52.964 "is_configured": false, 00:31:52.964 "data_offset": 0, 00:31:52.964 "data_size": 63488 00:31:52.964 }, 00:31:52.964 { 00:31:52.964 "name": "BaseBdev2", 00:31:52.964 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:52.964 "is_configured": true, 00:31:52.964 "data_offset": 2048, 00:31:52.964 "data_size": 63488 00:31:52.964 }, 00:31:52.964 { 00:31:52.964 "name": "BaseBdev3", 00:31:52.964 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:52.964 "is_configured": true, 00:31:52.964 "data_offset": 2048, 00:31:52.964 "data_size": 63488 00:31:52.964 }, 00:31:52.964 { 00:31:52.964 "name": "BaseBdev4", 00:31:52.964 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:52.964 "is_configured": true, 00:31:52.964 "data_offset": 2048, 00:31:52.964 "data_size": 63488 00:31:52.964 } 00:31:52.964 ] 00:31:52.964 }' 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:52.964 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a44d3a00-08a5-4977-b85a-563af315a2fb 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.225 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.485 [2024-11-05 16:01:25.661397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:53.485 [2024-11-05 16:01:25.661573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:53.485 [2024-11-05 
16:01:25.661583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:53.485 [2024-11-05 16:01:25.661784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:31:53.485 NewBaseBdev 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.485 [2024-11-05 16:01:25.665602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:53.485 [2024-11-05 16:01:25.665620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:31:53.485 [2024-11-05 16:01:25.665735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.485 16:01:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.486 [ 00:31:53.486 { 00:31:53.486 "name": "NewBaseBdev", 00:31:53.486 "aliases": [ 00:31:53.486 "a44d3a00-08a5-4977-b85a-563af315a2fb" 00:31:53.486 ], 00:31:53.486 "product_name": "Malloc disk", 00:31:53.486 "block_size": 512, 00:31:53.486 "num_blocks": 65536, 00:31:53.486 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:53.486 "assigned_rate_limits": { 00:31:53.486 "rw_ios_per_sec": 0, 00:31:53.486 "rw_mbytes_per_sec": 0, 00:31:53.486 "r_mbytes_per_sec": 0, 00:31:53.486 "w_mbytes_per_sec": 0 00:31:53.486 }, 00:31:53.486 "claimed": true, 00:31:53.486 "claim_type": "exclusive_write", 00:31:53.486 "zoned": false, 00:31:53.486 "supported_io_types": { 00:31:53.486 "read": true, 00:31:53.486 "write": true, 00:31:53.486 "unmap": true, 00:31:53.486 "flush": true, 00:31:53.486 "reset": true, 00:31:53.486 "nvme_admin": false, 00:31:53.486 "nvme_io": false, 00:31:53.486 "nvme_io_md": false, 00:31:53.486 "write_zeroes": true, 00:31:53.486 "zcopy": true, 00:31:53.486 "get_zone_info": false, 00:31:53.486 "zone_management": false, 00:31:53.486 "zone_append": false, 00:31:53.486 "compare": false, 00:31:53.486 "compare_and_write": false, 00:31:53.486 "abort": true, 00:31:53.486 "seek_hole": false, 00:31:53.486 "seek_data": false, 00:31:53.486 "copy": true, 00:31:53.486 "nvme_iov_md": false 00:31:53.486 }, 00:31:53.486 "memory_domains": [ 00:31:53.486 { 00:31:53.486 "dma_device_id": "system", 00:31:53.486 "dma_device_type": 1 00:31:53.486 }, 00:31:53.486 { 00:31:53.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.486 "dma_device_type": 2 00:31:53.486 } 00:31:53.486 ], 00:31:53.486 "driver_specific": {} 00:31:53.486 } 00:31:53.486 ] 00:31:53.486 16:01:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:53.486 "name": "Existed_Raid", 00:31:53.486 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:53.486 "strip_size_kb": 64, 00:31:53.486 "state": "online", 00:31:53.486 "raid_level": "raid5f", 00:31:53.486 "superblock": true, 00:31:53.486 "num_base_bdevs": 4, 00:31:53.486 "num_base_bdevs_discovered": 4, 00:31:53.486 "num_base_bdevs_operational": 4, 00:31:53.486 "base_bdevs_list": [ 00:31:53.486 { 00:31:53.486 "name": "NewBaseBdev", 00:31:53.486 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:53.486 "is_configured": true, 00:31:53.486 "data_offset": 2048, 00:31:53.486 "data_size": 63488 00:31:53.486 }, 00:31:53.486 { 00:31:53.486 "name": "BaseBdev2", 00:31:53.486 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:53.486 "is_configured": true, 00:31:53.486 "data_offset": 2048, 00:31:53.486 "data_size": 63488 00:31:53.486 }, 00:31:53.486 { 00:31:53.486 "name": "BaseBdev3", 00:31:53.486 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:53.486 "is_configured": true, 00:31:53.486 "data_offset": 2048, 00:31:53.486 "data_size": 63488 00:31:53.486 }, 00:31:53.486 { 00:31:53.486 "name": "BaseBdev4", 00:31:53.486 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:53.486 "is_configured": true, 00:31:53.486 "data_offset": 2048, 00:31:53.486 "data_size": 63488 00:31:53.486 } 00:31:53.486 ] 00:31:53.486 }' 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:53.486 16:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.748 [2024-11-05 16:01:26.062238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.748 "name": "Existed_Raid", 00:31:53.748 "aliases": [ 00:31:53.748 "98f33fbd-0fd3-4fca-b4de-d063a0293ff9" 00:31:53.748 ], 00:31:53.748 "product_name": "Raid Volume", 00:31:53.748 "block_size": 512, 00:31:53.748 "num_blocks": 190464, 00:31:53.748 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:53.748 "assigned_rate_limits": { 00:31:53.748 "rw_ios_per_sec": 0, 00:31:53.748 "rw_mbytes_per_sec": 0, 00:31:53.748 "r_mbytes_per_sec": 0, 00:31:53.748 "w_mbytes_per_sec": 0 00:31:53.748 }, 00:31:53.748 "claimed": false, 00:31:53.748 "zoned": false, 00:31:53.748 "supported_io_types": { 00:31:53.748 "read": true, 00:31:53.748 "write": true, 00:31:53.748 "unmap": false, 00:31:53.748 "flush": false, 00:31:53.748 "reset": true, 00:31:53.748 "nvme_admin": false, 00:31:53.748 "nvme_io": false, 
00:31:53.748 "nvme_io_md": false, 00:31:53.748 "write_zeroes": true, 00:31:53.748 "zcopy": false, 00:31:53.748 "get_zone_info": false, 00:31:53.748 "zone_management": false, 00:31:53.748 "zone_append": false, 00:31:53.748 "compare": false, 00:31:53.748 "compare_and_write": false, 00:31:53.748 "abort": false, 00:31:53.748 "seek_hole": false, 00:31:53.748 "seek_data": false, 00:31:53.748 "copy": false, 00:31:53.748 "nvme_iov_md": false 00:31:53.748 }, 00:31:53.748 "driver_specific": { 00:31:53.748 "raid": { 00:31:53.748 "uuid": "98f33fbd-0fd3-4fca-b4de-d063a0293ff9", 00:31:53.748 "strip_size_kb": 64, 00:31:53.748 "state": "online", 00:31:53.748 "raid_level": "raid5f", 00:31:53.748 "superblock": true, 00:31:53.748 "num_base_bdevs": 4, 00:31:53.748 "num_base_bdevs_discovered": 4, 00:31:53.748 "num_base_bdevs_operational": 4, 00:31:53.748 "base_bdevs_list": [ 00:31:53.748 { 00:31:53.748 "name": "NewBaseBdev", 00:31:53.748 "uuid": "a44d3a00-08a5-4977-b85a-563af315a2fb", 00:31:53.748 "is_configured": true, 00:31:53.748 "data_offset": 2048, 00:31:53.748 "data_size": 63488 00:31:53.748 }, 00:31:53.748 { 00:31:53.748 "name": "BaseBdev2", 00:31:53.748 "uuid": "de4517c7-1089-4e5e-9cc4-f08b312952c6", 00:31:53.748 "is_configured": true, 00:31:53.748 "data_offset": 2048, 00:31:53.748 "data_size": 63488 00:31:53.748 }, 00:31:53.748 { 00:31:53.748 "name": "BaseBdev3", 00:31:53.748 "uuid": "02c7d821-55f0-44ae-9f5e-a0c44b1d7381", 00:31:53.748 "is_configured": true, 00:31:53.748 "data_offset": 2048, 00:31:53.748 "data_size": 63488 00:31:53.748 }, 00:31:53.748 { 00:31:53.748 "name": "BaseBdev4", 00:31:53.748 "uuid": "de1b08ba-1abf-42ec-9fc5-b5cfa842c6bc", 00:31:53.748 "is_configured": true, 00:31:53.748 "data_offset": 2048, 00:31:53.748 "data_size": 63488 00:31:53.748 } 00:31:53.748 ] 00:31:53.748 } 00:31:53.748 } 00:31:53.748 }' 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:53.748 BaseBdev2 00:31:53.748 BaseBdev3 00:31:53.748 BaseBdev4' 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.748 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.010 16:01:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.010 [2024-11-05 16:01:26.254069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:54.010 [2024-11-05 16:01:26.254094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:54.010 [2024-11-05 16:01:26.254149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:54.010 [2024-11-05 16:01:26.254381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:54.010 [2024-11-05 16:01:26.254389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80779 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80779 ']' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80779 00:31:54.010 16:01:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80779 00:31:54.010 killing process with pid 80779 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80779' 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80779 00:31:54.010 [2024-11-05 16:01:26.282994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:54.010 16:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80779 00:31:54.271 [2024-11-05 16:01:26.476816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:54.841 16:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:31:54.841 00:31:54.841 real 0m8.138s 00:31:54.841 user 0m13.202s 00:31:54.841 sys 0m1.325s 00:31:54.841 16:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:54.841 ************************************ 00:31:54.841 END TEST raid5f_state_function_test_sb 00:31:54.841 ************************************ 00:31:54.841 16:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.841 16:01:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:31:54.841 16:01:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:54.841 
16:01:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:54.841 16:01:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:54.841 ************************************ 00:31:54.841 START TEST raid5f_superblock_test 00:31:54.841 ************************************ 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81416 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81416 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81416 ']' 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:54.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:54.841 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:54.841 [2024-11-05 16:01:27.135677] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
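The `waitforlisten 81416` call above blocks until the freshly started `bdev_svc` process is listening on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern is below; `wait_for_path` is an illustrative name, not SPDK's actual helper, and the real `waitforlisten` also checks that the PID is still alive between retries.

```shell
#!/usr/bin/env bash
# Minimal sketch of the waitforlisten pattern: poll until a socket path
# exists, giving up after max_retries attempts. Illustrative only; the
# real helper also verifies the target process has not exited.
wait_for_path() {
    local path=$1
    local max_retries=${2:-100}
    local i=0
    while (( i < max_retries )); do
        [[ -e $path ]] && return 0
        sleep 0.1
        (( i++ ))
    done
    return 1
}

# Demo: a plain temp file stands in for /var/tmp/spdk.sock
sock=$(mktemp)
wait_for_path "$sock" 10 && echo "listening: $sock"
rm -f "$sock"
```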
00:31:54.841 [2024-11-05 16:01:27.135800] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81416 ] 00:31:55.100 [2024-11-05 16:01:27.292339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.100 [2024-11-05 16:01:27.374948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.100 [2024-11-05 16:01:27.484945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.101 [2024-11-05 16:01:27.484984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.667 16:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.667 malloc1 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.667 [2024-11-05 16:01:28.008554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:55.667 [2024-11-05 16:01:28.008602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.667 [2024-11-05 16:01:28.008623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:55.667 [2024-11-05 16:01:28.008635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.667 [2024-11-05 16:01:28.010384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.667 [2024-11-05 16:01:28.010411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:55.667 pt1 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.667 malloc2 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.667 [2024-11-05 16:01:28.049141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:55.667 [2024-11-05 16:01:28.049177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.667 [2024-11-05 16:01:28.049193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:55.667 [2024-11-05 16:01:28.049201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.667 [2024-11-05 16:01:28.050990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.667 [2024-11-05 16:01:28.051015] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:55.667 pt2 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.667 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.926 malloc3 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.926 [2024-11-05 16:01:28.103907] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:55.926 [2024-11-05 16:01:28.103947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.926 [2024-11-05 16:01:28.103965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:55.926 [2024-11-05 16:01:28.103973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.926 [2024-11-05 16:01:28.105714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.926 [2024-11-05 16:01:28.105738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:55.926 pt3 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:55.926 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.927 16:01:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.927 malloc4 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.927 [2024-11-05 16:01:28.144265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:55.927 [2024-11-05 16:01:28.144300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.927 [2024-11-05 16:01:28.144313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:55.927 [2024-11-05 16:01:28.144321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.927 [2024-11-05 16:01:28.146031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.927 [2024-11-05 16:01:28.146055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:31:55.927 pt4 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:55.927 [2024-11-05 16:01:28.156299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:55.927 [2024-11-05 16:01:28.157831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:55.927 [2024-11-05 16:01:28.157909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:55.927 [2024-11-05 16:01:28.157963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:55.927 [2024-11-05 16:01:28.158111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:55.927 [2024-11-05 16:01:28.158129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:55.927 [2024-11-05 16:01:28.158332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:55.927 [2024-11-05 16:01:28.162272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:55.927 [2024-11-05 16:01:28.162293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:55.927 [2024-11-05 16:01:28.162433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:55.927 
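The setup loop traced above (lines 416-430 of `bdev_raid.sh`) builds four malloc/passthru pairs and then assembles the raid5f volume over them. The sketch below mirrors that structure with `rpc_cmd` stubbed to `echo`, since no SPDK target is running here; the flag meanings are taken from the trace (`-z 64` strip size in KiB, `-s` store a superblock).

```shell
#!/usr/bin/env bash
# Sketch of the raid5f_superblock_test setup sequence: create malloc
# bdevs, wrap each in a passthru bdev with a fixed UUID, then create
# the raid5f volume. rpc_cmd is stubbed to echo for illustration.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
base_bdevs_pt=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
    base_bdevs_pt+=("$bdev_pt")
    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
# -z 64: 64 KiB strip size, -r raid5f: raid level, -s: write superblocks
rpc_cmd bdev_raid_create -z 64 -r raid5f -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
```

Passing `"${base_bdevs_pt[*]}"` as a single word reproduces the quoted `'pt1 pt2 pt3 pt4'` base-bdev list visible in the trace.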
16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:55.927 "name": "raid_bdev1", 00:31:55.927 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:55.927 "strip_size_kb": 64, 00:31:55.927 "state": "online", 00:31:55.927 "raid_level": "raid5f", 00:31:55.927 "superblock": true, 00:31:55.927 "num_base_bdevs": 4, 00:31:55.927 "num_base_bdevs_discovered": 4, 00:31:55.927 "num_base_bdevs_operational": 4, 00:31:55.927 "base_bdevs_list": [ 00:31:55.927 { 00:31:55.927 "name": "pt1", 00:31:55.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:55.927 "is_configured": true, 00:31:55.927 "data_offset": 2048, 00:31:55.927 "data_size": 63488 00:31:55.927 }, 00:31:55.927 { 00:31:55.927 "name": "pt2", 00:31:55.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:55.927 "is_configured": true, 00:31:55.927 "data_offset": 2048, 00:31:55.927 
"data_size": 63488 00:31:55.927 }, 00:31:55.927 { 00:31:55.927 "name": "pt3", 00:31:55.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:55.927 "is_configured": true, 00:31:55.927 "data_offset": 2048, 00:31:55.927 "data_size": 63488 00:31:55.927 }, 00:31:55.927 { 00:31:55.927 "name": "pt4", 00:31:55.927 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:55.927 "is_configured": true, 00:31:55.927 "data_offset": 2048, 00:31:55.927 "data_size": 63488 00:31:55.927 } 00:31:55.927 ] 00:31:55.927 }' 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:55.927 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.188 [2024-11-05 16:01:28.463069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.188 "name": "raid_bdev1", 00:31:56.188 "aliases": [ 00:31:56.188 "e38452c2-eb42-4285-8251-ab0ba89f71df" 00:31:56.188 ], 00:31:56.188 "product_name": "Raid Volume", 00:31:56.188 "block_size": 512, 00:31:56.188 "num_blocks": 190464, 00:31:56.188 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:56.188 "assigned_rate_limits": { 00:31:56.188 "rw_ios_per_sec": 0, 00:31:56.188 "rw_mbytes_per_sec": 0, 00:31:56.188 "r_mbytes_per_sec": 0, 00:31:56.188 "w_mbytes_per_sec": 0 00:31:56.188 }, 00:31:56.188 "claimed": false, 00:31:56.188 "zoned": false, 00:31:56.188 "supported_io_types": { 00:31:56.188 "read": true, 00:31:56.188 "write": true, 00:31:56.188 "unmap": false, 00:31:56.188 "flush": false, 00:31:56.188 "reset": true, 00:31:56.188 "nvme_admin": false, 00:31:56.188 "nvme_io": false, 00:31:56.188 "nvme_io_md": false, 00:31:56.188 "write_zeroes": true, 00:31:56.188 "zcopy": false, 00:31:56.188 "get_zone_info": false, 00:31:56.188 "zone_management": false, 00:31:56.188 "zone_append": false, 00:31:56.188 "compare": false, 00:31:56.188 "compare_and_write": false, 00:31:56.188 "abort": false, 00:31:56.188 "seek_hole": false, 00:31:56.188 "seek_data": false, 00:31:56.188 "copy": false, 00:31:56.188 "nvme_iov_md": false 00:31:56.188 }, 00:31:56.188 "driver_specific": { 00:31:56.188 "raid": { 00:31:56.188 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:56.188 "strip_size_kb": 64, 00:31:56.188 "state": "online", 00:31:56.188 "raid_level": "raid5f", 00:31:56.188 "superblock": true, 00:31:56.188 "num_base_bdevs": 4, 00:31:56.188 "num_base_bdevs_discovered": 4, 00:31:56.188 "num_base_bdevs_operational": 4, 00:31:56.188 "base_bdevs_list": [ 00:31:56.188 { 00:31:56.188 "name": "pt1", 00:31:56.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:56.188 "is_configured": true, 00:31:56.188 "data_offset": 2048, 
00:31:56.188 "data_size": 63488 00:31:56.188 }, 00:31:56.188 { 00:31:56.188 "name": "pt2", 00:31:56.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:56.188 "is_configured": true, 00:31:56.188 "data_offset": 2048, 00:31:56.188 "data_size": 63488 00:31:56.188 }, 00:31:56.188 { 00:31:56.188 "name": "pt3", 00:31:56.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:56.188 "is_configured": true, 00:31:56.188 "data_offset": 2048, 00:31:56.188 "data_size": 63488 00:31:56.188 }, 00:31:56.188 { 00:31:56.188 "name": "pt4", 00:31:56.188 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:56.188 "is_configured": true, 00:31:56.188 "data_offset": 2048, 00:31:56.188 "data_size": 63488 00:31:56.188 } 00:31:56.188 ] 00:31:56.188 } 00:31:56.188 } 00:31:56.188 }' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:56.188 pt2 00:31:56.188 pt3 00:31:56.188 pt4' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:56.188 16:01:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:56.188 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:56.448 [2024-11-05 16:01:28.675069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e38452c2-eb42-4285-8251-ab0ba89f71df 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e38452c2-eb42-4285-8251-ab0ba89f71df ']' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 [2024-11-05 16:01:28.702911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:56.448 [2024-11-05 16:01:28.702932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:56.448 [2024-11-05 16:01:28.702990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:56.448 [2024-11-05 16:01:28.703059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:56.448 [2024-11-05 16:01:28.703070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:56.448 
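The teardown traced next deletes the raid volume first and then each passthru base bdev, after which the test asserts via `jq` that no bdev with `product_name == "passthru"` remains. A sketch of that ordering, again with `rpc_cmd` stubbed to `echo`:

```shell
#!/usr/bin/env bash
# Sketch of the teardown order above: the raid bdev must be deleted
# before its passthru base bdevs are removed. rpc_cmd stubbed to echo.
rpc_cmd() { echo "rpc: $*"; }

base_bdevs_pt=(pt1 pt2 pt3 pt4)
rpc_cmd bdev_raid_delete raid_bdev1
for i in "${base_bdevs_pt[@]}"; do
    rpc_cmd bdev_passthru_delete "$i"
done
```

Against a live target, the follow-up check corresponds to `rpc_cmd bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'` evaluating to `false`, as seen in the trace.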
16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:31:56.448 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.449 16:01:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.449 [2024-11-05 16:01:28.818976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:56.449 [2024-11-05 16:01:28.820564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:56.449 [2024-11-05 16:01:28.820609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:56.449 [2024-11-05 16:01:28.820637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:31:56.449 [2024-11-05 16:01:28.820677] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:56.449 [2024-11-05 16:01:28.820712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:56.449 [2024-11-05 16:01:28.820728] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:56.449 [2024-11-05 16:01:28.820743] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:31:56.449 [2024-11-05 16:01:28.820753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:56.449 [2024-11-05 16:01:28.820766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:56.449 request: 00:31:56.449 { 00:31:56.449 "name": "raid_bdev1", 00:31:56.449 "raid_level": "raid5f", 00:31:56.449 "base_bdevs": [ 00:31:56.449 "malloc1", 00:31:56.449 "malloc2", 00:31:56.449 "malloc3", 00:31:56.449 "malloc4" 00:31:56.449 ], 00:31:56.449 "strip_size_kb": 64, 00:31:56.449 "superblock": false, 00:31:56.449 "method": "bdev_raid_create", 00:31:56.449 "req_id": 1 00:31:56.449 } 00:31:56.449 Got JSON-RPC error response 
00:31:56.449 response: 00:31:56.449 { 00:31:56.449 "code": -17, 00:31:56.449 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:56.449 } 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.449 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.449 [2024-11-05 16:01:28.862942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:56.449 [2024-11-05 16:01:28.862975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:31:56.449 [2024-11-05 16:01:28.862986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:56.449 [2024-11-05 16:01:28.862994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.709 [2024-11-05 16:01:28.864797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.709 [2024-11-05 16:01:28.864827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:56.709 [2024-11-05 16:01:28.864890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:56.709 [2024-11-05 16:01:28.864933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:56.709 pt1 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.709 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.709 "name": "raid_bdev1", 00:31:56.709 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:56.709 "strip_size_kb": 64, 00:31:56.709 "state": "configuring", 00:31:56.709 "raid_level": "raid5f", 00:31:56.709 "superblock": true, 00:31:56.709 "num_base_bdevs": 4, 00:31:56.709 "num_base_bdevs_discovered": 1, 00:31:56.709 "num_base_bdevs_operational": 4, 00:31:56.709 "base_bdevs_list": [ 00:31:56.709 { 00:31:56.709 "name": "pt1", 00:31:56.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:56.709 "is_configured": true, 00:31:56.709 "data_offset": 2048, 00:31:56.709 "data_size": 63488 00:31:56.709 }, 00:31:56.709 { 00:31:56.709 "name": null, 00:31:56.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:56.709 "is_configured": false, 00:31:56.709 "data_offset": 2048, 00:31:56.709 "data_size": 63488 00:31:56.709 }, 00:31:56.709 { 00:31:56.709 "name": null, 00:31:56.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:56.709 "is_configured": false, 00:31:56.709 "data_offset": 2048, 00:31:56.709 "data_size": 63488 00:31:56.709 }, 00:31:56.709 { 00:31:56.709 "name": null, 00:31:56.709 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:56.709 "is_configured": false, 00:31:56.709 "data_offset": 2048, 00:31:56.709 "data_size": 63488 00:31:56.710 } 00:31:56.710 ] 00:31:56.710 }' 
00:31:56.710 16:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.710 16:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.971 [2024-11-05 16:01:29.183047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:56.971 [2024-11-05 16:01:29.183100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.971 [2024-11-05 16:01:29.183115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:31:56.971 [2024-11-05 16:01:29.183123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.971 [2024-11-05 16:01:29.183457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.971 [2024-11-05 16:01:29.183474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:56.971 [2024-11-05 16:01:29.183531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:56.971 [2024-11-05 16:01:29.183548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:56.971 pt2 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.971 [2024-11-05 16:01:29.191049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.971 "name": "raid_bdev1", 00:31:56.971 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:56.971 "strip_size_kb": 64, 00:31:56.971 "state": "configuring", 00:31:56.971 "raid_level": "raid5f", 00:31:56.971 "superblock": true, 00:31:56.971 "num_base_bdevs": 4, 00:31:56.971 "num_base_bdevs_discovered": 1, 00:31:56.971 "num_base_bdevs_operational": 4, 00:31:56.971 "base_bdevs_list": [ 00:31:56.971 { 00:31:56.971 "name": "pt1", 00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:56.971 "is_configured": true, 00:31:56.971 "data_offset": 2048, 00:31:56.971 "data_size": 63488 00:31:56.971 }, 00:31:56.971 { 00:31:56.971 "name": null, 00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:56.971 "is_configured": false, 00:31:56.971 "data_offset": 0, 00:31:56.971 "data_size": 63488 00:31:56.971 }, 00:31:56.971 { 00:31:56.971 "name": null, 00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:56.971 "is_configured": false, 00:31:56.971 "data_offset": 2048, 00:31:56.971 "data_size": 63488 00:31:56.971 }, 00:31:56.971 { 00:31:56.971 "name": null, 00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:56.971 "is_configured": false, 00:31:56.971 "data_offset": 2048, 00:31:56.971 "data_size": 63488 00:31:56.971 } 00:31:56.971 ] 00:31:56.971 }' 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.971 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.233 [2024-11-05 16:01:29.523110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:57.233 [2024-11-05 16:01:29.523153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.233 [2024-11-05 16:01:29.523167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:57.233 [2024-11-05 16:01:29.523173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.233 [2024-11-05 16:01:29.523511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.233 [2024-11-05 16:01:29.523521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:57.233 [2024-11-05 16:01:29.523578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:57.233 [2024-11-05 16:01:29.523593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:57.233 pt2 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.233 [2024-11-05 16:01:29.531085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:31:57.233 [2024-11-05 16:01:29.531119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.233 [2024-11-05 16:01:29.531131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:57.233 [2024-11-05 16:01:29.531137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.233 [2024-11-05 16:01:29.531418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.233 [2024-11-05 16:01:29.531433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:57.233 [2024-11-05 16:01:29.531477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:57.233 [2024-11-05 16:01:29.531490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:57.233 pt3 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.233 [2024-11-05 16:01:29.539071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:57.233 [2024-11-05 16:01:29.539100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.233 [2024-11-05 16:01:29.539117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:31:57.233 [2024-11-05 16:01:29.539123] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.233 [2024-11-05 16:01:29.539401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.233 [2024-11-05 16:01:29.539414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:31:57.233 [2024-11-05 16:01:29.539456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:31:57.233 [2024-11-05 16:01:29.539471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:57.233 [2024-11-05 16:01:29.539575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:57.233 [2024-11-05 16:01:29.539585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:57.233 [2024-11-05 16:01:29.539777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:57.233 [2024-11-05 16:01:29.543474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:57.233 [2024-11-05 16:01:29.543496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:57.233 [2024-11-05 16:01:29.543622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.233 pt4 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:57.233 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.234 "name": "raid_bdev1", 00:31:57.234 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:57.234 "strip_size_kb": 64, 00:31:57.234 "state": "online", 00:31:57.234 "raid_level": "raid5f", 00:31:57.234 "superblock": true, 00:31:57.234 "num_base_bdevs": 4, 00:31:57.234 "num_base_bdevs_discovered": 4, 00:31:57.234 "num_base_bdevs_operational": 4, 00:31:57.234 "base_bdevs_list": [ 00:31:57.234 { 00:31:57.234 "name": "pt1", 00:31:57.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:57.234 "is_configured": true, 00:31:57.234 
"data_offset": 2048, 00:31:57.234 "data_size": 63488 00:31:57.234 }, 00:31:57.234 { 00:31:57.234 "name": "pt2", 00:31:57.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:57.234 "is_configured": true, 00:31:57.234 "data_offset": 2048, 00:31:57.234 "data_size": 63488 00:31:57.234 }, 00:31:57.234 { 00:31:57.234 "name": "pt3", 00:31:57.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:57.234 "is_configured": true, 00:31:57.234 "data_offset": 2048, 00:31:57.234 "data_size": 63488 00:31:57.234 }, 00:31:57.234 { 00:31:57.234 "name": "pt4", 00:31:57.234 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:57.234 "is_configured": true, 00:31:57.234 "data_offset": 2048, 00:31:57.234 "data_size": 63488 00:31:57.234 } 00:31:57.234 ] 00:31:57.234 }' 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.234 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.492 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:57.492 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:57.492 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:57.492 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:57.492 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:57.493 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:57.493 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:57.493 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:57.493 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.493 16:01:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.493 [2024-11-05 16:01:29.872324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:57.493 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.493 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.493 "name": "raid_bdev1", 00:31:57.493 "aliases": [ 00:31:57.493 "e38452c2-eb42-4285-8251-ab0ba89f71df" 00:31:57.493 ], 00:31:57.493 "product_name": "Raid Volume", 00:31:57.493 "block_size": 512, 00:31:57.493 "num_blocks": 190464, 00:31:57.493 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:57.493 "assigned_rate_limits": { 00:31:57.493 "rw_ios_per_sec": 0, 00:31:57.493 "rw_mbytes_per_sec": 0, 00:31:57.493 "r_mbytes_per_sec": 0, 00:31:57.493 "w_mbytes_per_sec": 0 00:31:57.493 }, 00:31:57.493 "claimed": false, 00:31:57.493 "zoned": false, 00:31:57.493 "supported_io_types": { 00:31:57.493 "read": true, 00:31:57.493 "write": true, 00:31:57.493 "unmap": false, 00:31:57.493 "flush": false, 00:31:57.493 "reset": true, 00:31:57.493 "nvme_admin": false, 00:31:57.493 "nvme_io": false, 00:31:57.493 "nvme_io_md": false, 00:31:57.493 "write_zeroes": true, 00:31:57.493 "zcopy": false, 00:31:57.493 "get_zone_info": false, 00:31:57.493 "zone_management": false, 00:31:57.493 "zone_append": false, 00:31:57.493 "compare": false, 00:31:57.493 "compare_and_write": false, 00:31:57.493 "abort": false, 00:31:57.493 "seek_hole": false, 00:31:57.493 "seek_data": false, 00:31:57.493 "copy": false, 00:31:57.493 "nvme_iov_md": false 00:31:57.493 }, 00:31:57.493 "driver_specific": { 00:31:57.493 "raid": { 00:31:57.493 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:57.493 "strip_size_kb": 64, 00:31:57.493 "state": "online", 00:31:57.493 "raid_level": "raid5f", 00:31:57.493 "superblock": true, 00:31:57.493 "num_base_bdevs": 4, 00:31:57.493 "num_base_bdevs_discovered": 4, 
00:31:57.493 "num_base_bdevs_operational": 4, 00:31:57.493 "base_bdevs_list": [ 00:31:57.493 { 00:31:57.493 "name": "pt1", 00:31:57.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:57.493 "is_configured": true, 00:31:57.493 "data_offset": 2048, 00:31:57.493 "data_size": 63488 00:31:57.493 }, 00:31:57.493 { 00:31:57.493 "name": "pt2", 00:31:57.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:57.493 "is_configured": true, 00:31:57.493 "data_offset": 2048, 00:31:57.493 "data_size": 63488 00:31:57.493 }, 00:31:57.493 { 00:31:57.493 "name": "pt3", 00:31:57.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:57.493 "is_configured": true, 00:31:57.493 "data_offset": 2048, 00:31:57.493 "data_size": 63488 00:31:57.493 }, 00:31:57.493 { 00:31:57.493 "name": "pt4", 00:31:57.493 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:57.493 "is_configured": true, 00:31:57.493 "data_offset": 2048, 00:31:57.493 "data_size": 63488 00:31:57.493 } 00:31:57.493 ] 00:31:57.493 } 00:31:57.493 } 00:31:57.493 }' 00:31:57.493 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:57.753 pt2 00:31:57.753 pt3 00:31:57.753 pt4' 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.753 16:01:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.753 16:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.753 [2024-11-05 16:01:30.108341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.753 
16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e38452c2-eb42-4285-8251-ab0ba89f71df '!=' e38452c2-eb42-4285-8251-ab0ba89f71df ']' 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.753 [2024-11-05 16:01:30.144216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:57.753 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.754 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.015 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.015 "name": "raid_bdev1", 00:31:58.016 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:58.016 "strip_size_kb": 64, 00:31:58.016 "state": "online", 00:31:58.016 "raid_level": "raid5f", 00:31:58.016 "superblock": true, 00:31:58.016 "num_base_bdevs": 4, 00:31:58.016 "num_base_bdevs_discovered": 3, 00:31:58.016 "num_base_bdevs_operational": 3, 00:31:58.016 "base_bdevs_list": [ 00:31:58.016 { 00:31:58.016 "name": null, 00:31:58.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.016 "is_configured": false, 00:31:58.016 "data_offset": 0, 00:31:58.016 "data_size": 63488 00:31:58.016 }, 00:31:58.016 { 00:31:58.016 "name": "pt2", 00:31:58.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.016 "is_configured": true, 00:31:58.016 "data_offset": 2048, 00:31:58.016 "data_size": 63488 00:31:58.016 }, 00:31:58.016 { 00:31:58.016 "name": "pt3", 00:31:58.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:58.016 "is_configured": true, 00:31:58.016 "data_offset": 2048, 00:31:58.016 "data_size": 63488 00:31:58.016 }, 00:31:58.016 { 00:31:58.016 "name": "pt4", 00:31:58.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:58.016 "is_configured": true, 00:31:58.016 
"data_offset": 2048, 00:31:58.016 "data_size": 63488 00:31:58.016 } 00:31:58.016 ] 00:31:58.016 }' 00:31:58.016 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.016 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 [2024-11-05 16:01:30.476245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:58.278 [2024-11-05 16:01:30.476271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:58.278 [2024-11-05 16:01:30.476328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:58.278 [2024-11-05 16:01:30.476390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:58.278 [2024-11-05 16:01:30.476398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 [2024-11-05 16:01:30.548250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:58.278 [2024-11-05 16:01:30.548292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.278 [2024-11-05 16:01:30.548306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:58.278 [2024-11-05 16:01:30.548312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.278 [2024-11-05 16:01:30.550142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.278 [2024-11-05 16:01:30.550169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:58.278 [2024-11-05 16:01:30.550228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:58.278 [2024-11-05 16:01:30.550262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:58.278 pt2 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.278 "name": "raid_bdev1", 00:31:58.278 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:58.278 "strip_size_kb": 64, 00:31:58.278 "state": "configuring", 00:31:58.278 "raid_level": "raid5f", 00:31:58.278 "superblock": true, 00:31:58.278 
"num_base_bdevs": 4, 00:31:58.278 "num_base_bdevs_discovered": 1, 00:31:58.278 "num_base_bdevs_operational": 3, 00:31:58.278 "base_bdevs_list": [ 00:31:58.278 { 00:31:58.278 "name": null, 00:31:58.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.278 "is_configured": false, 00:31:58.278 "data_offset": 2048, 00:31:58.278 "data_size": 63488 00:31:58.278 }, 00:31:58.278 { 00:31:58.278 "name": "pt2", 00:31:58.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.278 "is_configured": true, 00:31:58.278 "data_offset": 2048, 00:31:58.278 "data_size": 63488 00:31:58.278 }, 00:31:58.278 { 00:31:58.278 "name": null, 00:31:58.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:58.278 "is_configured": false, 00:31:58.278 "data_offset": 2048, 00:31:58.278 "data_size": 63488 00:31:58.278 }, 00:31:58.278 { 00:31:58.278 "name": null, 00:31:58.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:58.278 "is_configured": false, 00:31:58.278 "data_offset": 2048, 00:31:58.278 "data_size": 63488 00:31:58.278 } 00:31:58.278 ] 00:31:58.278 }' 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.278 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.541 [2024-11-05 16:01:30.860341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:58.541 [2024-11-05 
16:01:30.860390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.541 [2024-11-05 16:01:30.860407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:31:58.541 [2024-11-05 16:01:30.860414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.541 [2024-11-05 16:01:30.860764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.541 [2024-11-05 16:01:30.860775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:58.541 [2024-11-05 16:01:30.860836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:58.541 [2024-11-05 16:01:30.860867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:58.541 pt3 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.541 "name": "raid_bdev1", 00:31:58.541 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:58.541 "strip_size_kb": 64, 00:31:58.541 "state": "configuring", 00:31:58.541 "raid_level": "raid5f", 00:31:58.541 "superblock": true, 00:31:58.541 "num_base_bdevs": 4, 00:31:58.541 "num_base_bdevs_discovered": 2, 00:31:58.541 "num_base_bdevs_operational": 3, 00:31:58.541 "base_bdevs_list": [ 00:31:58.541 { 00:31:58.541 "name": null, 00:31:58.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.541 "is_configured": false, 00:31:58.541 "data_offset": 2048, 00:31:58.541 "data_size": 63488 00:31:58.541 }, 00:31:58.541 { 00:31:58.541 "name": "pt2", 00:31:58.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.541 "is_configured": true, 00:31:58.541 "data_offset": 2048, 00:31:58.541 "data_size": 63488 00:31:58.541 }, 00:31:58.541 { 00:31:58.541 "name": "pt3", 00:31:58.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:58.541 "is_configured": true, 00:31:58.541 "data_offset": 2048, 00:31:58.541 "data_size": 63488 00:31:58.541 }, 00:31:58.541 { 00:31:58.541 "name": null, 00:31:58.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:58.541 "is_configured": false, 00:31:58.541 "data_offset": 2048, 
00:31:58.541 "data_size": 63488 00:31:58.541 } 00:31:58.541 ] 00:31:58.541 }' 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.541 16:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.803 [2024-11-05 16:01:31.196409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:58.803 [2024-11-05 16:01:31.196455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.803 [2024-11-05 16:01:31.196470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:31:58.803 [2024-11-05 16:01:31.196477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.803 [2024-11-05 16:01:31.196811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.803 [2024-11-05 16:01:31.196829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:31:58.803 [2024-11-05 16:01:31.196897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:31:58.803 [2024-11-05 16:01:31.196914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:58.803 [2024-11-05 16:01:31.197016] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:58.803 [2024-11-05 16:01:31.197029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:58.803 [2024-11-05 16:01:31.197229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:58.803 [2024-11-05 16:01:31.200963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:58.803 [2024-11-05 16:01:31.200985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:31:58.803 [2024-11-05 16:01:31.201197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:58.803 pt4 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.803 
16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.803 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.064 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.064 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.064 "name": "raid_bdev1", 00:31:59.064 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:59.064 "strip_size_kb": 64, 00:31:59.064 "state": "online", 00:31:59.064 "raid_level": "raid5f", 00:31:59.064 "superblock": true, 00:31:59.064 "num_base_bdevs": 4, 00:31:59.064 "num_base_bdevs_discovered": 3, 00:31:59.064 "num_base_bdevs_operational": 3, 00:31:59.064 "base_bdevs_list": [ 00:31:59.064 { 00:31:59.064 "name": null, 00:31:59.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.064 "is_configured": false, 00:31:59.064 "data_offset": 2048, 00:31:59.064 "data_size": 63488 00:31:59.064 }, 00:31:59.064 { 00:31:59.065 "name": "pt2", 00:31:59.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:59.065 "is_configured": true, 00:31:59.065 "data_offset": 2048, 00:31:59.065 "data_size": 63488 00:31:59.065 }, 00:31:59.065 { 00:31:59.065 "name": "pt3", 00:31:59.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:59.065 "is_configured": true, 00:31:59.065 "data_offset": 2048, 00:31:59.065 "data_size": 63488 00:31:59.065 }, 00:31:59.065 { 00:31:59.065 "name": "pt4", 00:31:59.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:59.065 "is_configured": true, 00:31:59.065 "data_offset": 2048, 00:31:59.065 "data_size": 63488 00:31:59.065 } 00:31:59.065 ] 00:31:59.065 }' 00:31:59.065 16:01:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.065 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.327 [2024-11-05 16:01:31.513609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:59.327 [2024-11-05 16:01:31.513634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:59.327 [2024-11-05 16:01:31.513689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:59.327 [2024-11-05 16:01:31.513746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:59.327 [2024-11-05 16:01:31.513755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.327 [2024-11-05 16:01:31.565602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:59.327 [2024-11-05 16:01:31.565646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.327 [2024-11-05 16:01:31.565662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:31:59.327 [2024-11-05 16:01:31.565670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.327 [2024-11-05 16:01:31.567491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:59.327 [2024-11-05 16:01:31.567518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:59.327 [2024-11-05 16:01:31.567574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:59.327 [2024-11-05 16:01:31.567610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:59.327 
[2024-11-05 16:01:31.567700] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:59.327 [2024-11-05 16:01:31.567710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:59.327 [2024-11-05 16:01:31.567721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:31:59.327 [2024-11-05 16:01:31.567760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:59.327 [2024-11-05 16:01:31.567838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:59.327 pt1 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.327 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.327 "name": "raid_bdev1", 00:31:59.327 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:59.327 "strip_size_kb": 64, 00:31:59.327 "state": "configuring", 00:31:59.328 "raid_level": "raid5f", 00:31:59.328 "superblock": true, 00:31:59.328 "num_base_bdevs": 4, 00:31:59.328 "num_base_bdevs_discovered": 2, 00:31:59.328 "num_base_bdevs_operational": 3, 00:31:59.328 "base_bdevs_list": [ 00:31:59.328 { 00:31:59.328 "name": null, 00:31:59.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.328 "is_configured": false, 00:31:59.328 "data_offset": 2048, 00:31:59.328 "data_size": 63488 00:31:59.328 }, 00:31:59.328 { 00:31:59.328 "name": "pt2", 00:31:59.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:59.328 "is_configured": true, 00:31:59.328 "data_offset": 2048, 00:31:59.328 "data_size": 63488 00:31:59.328 }, 00:31:59.328 { 00:31:59.328 "name": "pt3", 00:31:59.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:59.328 "is_configured": true, 00:31:59.328 "data_offset": 2048, 00:31:59.328 "data_size": 63488 00:31:59.328 }, 00:31:59.328 { 00:31:59.328 "name": null, 00:31:59.328 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:59.328 "is_configured": false, 00:31:59.328 "data_offset": 2048, 00:31:59.328 "data_size": 63488 00:31:59.328 } 00:31:59.328 ] 
00:31:59.328 }' 00:31:59.328 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.328 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.589 [2024-11-05 16:01:31.909691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:59.589 [2024-11-05 16:01:31.909733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.589 [2024-11-05 16:01:31.909750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:31:59.589 [2024-11-05 16:01:31.909758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.589 [2024-11-05 16:01:31.910108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:59.589 [2024-11-05 16:01:31.910119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:31:59.589 [2024-11-05 16:01:31.910177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:31:59.589 [2024-11-05 16:01:31.910196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:59.589 [2024-11-05 16:01:31.910299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:59.589 [2024-11-05 16:01:31.910306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:59.589 [2024-11-05 16:01:31.910503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:59.589 [2024-11-05 16:01:31.914395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:31:59.589 [2024-11-05 16:01:31.914416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:31:59.589 [2024-11-05 16:01:31.914628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.589 pt4 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:59.589 16:01:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.589 "name": "raid_bdev1", 00:31:59.589 "uuid": "e38452c2-eb42-4285-8251-ab0ba89f71df", 00:31:59.589 "strip_size_kb": 64, 00:31:59.589 "state": "online", 00:31:59.589 "raid_level": "raid5f", 00:31:59.589 "superblock": true, 00:31:59.589 "num_base_bdevs": 4, 00:31:59.589 "num_base_bdevs_discovered": 3, 00:31:59.589 "num_base_bdevs_operational": 3, 00:31:59.589 "base_bdevs_list": [ 00:31:59.589 { 00:31:59.589 "name": null, 00:31:59.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.589 "is_configured": false, 00:31:59.589 "data_offset": 2048, 00:31:59.589 "data_size": 63488 00:31:59.589 }, 00:31:59.589 { 00:31:59.589 "name": "pt2", 00:31:59.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:59.589 "is_configured": true, 00:31:59.589 "data_offset": 2048, 00:31:59.589 "data_size": 63488 00:31:59.589 }, 00:31:59.589 { 00:31:59.589 "name": "pt3", 00:31:59.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:59.589 "is_configured": true, 00:31:59.589 "data_offset": 2048, 00:31:59.589 "data_size": 63488 
00:31:59.589 }, 00:31:59.589 { 00:31:59.589 "name": "pt4", 00:31:59.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:59.589 "is_configured": true, 00:31:59.589 "data_offset": 2048, 00:31:59.589 "data_size": 63488 00:31:59.589 } 00:31:59.589 ] 00:31:59.589 }' 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.589 16:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:31:59.850 [2024-11-05 16:01:32.255312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:59.850 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e38452c2-eb42-4285-8251-ab0ba89f71df '!=' e38452c2-eb42-4285-8251-ab0ba89f71df ']' 00:32:00.112 16:01:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81416 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81416 ']' 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81416 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81416 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81416' 00:32:00.112 killing process with pid 81416 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81416 00:32:00.112 [2024-11-05 16:01:32.307132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:00.112 [2024-11-05 16:01:32.307197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:00.112 [2024-11-05 16:01:32.307254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:00.112 [2024-11-05 16:01:32.307263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:00.112 16:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81416 00:32:00.112 [2024-11-05 16:01:32.503331] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:00.680 16:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:00.680 
00:32:00.680 real 0m5.990s 00:32:00.680 user 0m9.499s 00:32:00.680 sys 0m0.998s 00:32:00.680 16:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:00.680 16:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.680 ************************************ 00:32:00.680 END TEST raid5f_superblock_test 00:32:00.680 ************************************ 00:32:00.940 16:01:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:32:00.940 16:01:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:32:00.940 16:01:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:32:00.940 16:01:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:00.940 16:01:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:00.940 ************************************ 00:32:00.940 START TEST raid5f_rebuild_test 00:32:00.940 ************************************ 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:32:00.940 16:01:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81874 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81874 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81874 ']' 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:00.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:00.940 16:01:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.940 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:00.941 Zero copy mechanism will not be used. 00:32:00.941 [2024-11-05 16:01:33.177639] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:32:00.941 [2024-11-05 16:01:33.177763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81874 ] 00:32:00.941 [2024-11-05 16:01:33.333061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.199 [2024-11-05 16:01:33.417229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.199 [2024-11-05 16:01:33.531176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:01.199 [2024-11-05 16:01:33.531213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 BaseBdev1_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 [2024-11-05 16:01:34.052118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:32:01.767 [2024-11-05 16:01:34.052173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.767 [2024-11-05 16:01:34.052189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:01.767 [2024-11-05 16:01:34.052198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.767 [2024-11-05 16:01:34.053968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.767 [2024-11-05 16:01:34.053999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:01.767 BaseBdev1 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 BaseBdev2_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 [2024-11-05 16:01:34.088085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:01.767 [2024-11-05 16:01:34.088130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.767 [2024-11-05 16:01:34.088142] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:01.767 [2024-11-05 16:01:34.088151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.767 [2024-11-05 16:01:34.089835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.767 [2024-11-05 16:01:34.089876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:01.767 BaseBdev2 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 BaseBdev3_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 [2024-11-05 16:01:34.136185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:01.767 [2024-11-05 16:01:34.136230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.767 [2024-11-05 16:01:34.136247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:01.767 [2024-11-05 16:01:34.136256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.767 
[2024-11-05 16:01:34.137978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.767 [2024-11-05 16:01:34.138010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:01.767 BaseBdev3 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 BaseBdev4_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.767 [2024-11-05 16:01:34.172312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:01.767 [2024-11-05 16:01:34.172355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.767 [2024-11-05 16:01:34.172371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:01.767 [2024-11-05 16:01:34.172379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.767 [2024-11-05 16:01:34.174118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.767 [2024-11-05 16:01:34.174151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:32:01.767 BaseBdev4 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:32:01.767 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.768 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.026 spare_malloc 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.026 spare_delay 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.026 [2024-11-05 16:01:34.220514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:02.026 [2024-11-05 16:01:34.220555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:02.026 [2024-11-05 16:01:34.220569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:02.026 [2024-11-05 16:01:34.220577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:02.026 [2024-11-05 16:01:34.222290] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:02.026 [2024-11-05 16:01:34.222320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:02.026 spare 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.026 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.026 [2024-11-05 16:01:34.228564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:02.026 [2024-11-05 16:01:34.230061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:02.026 [2024-11-05 16:01:34.230112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:02.026 [2024-11-05 16:01:34.230152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:02.026 [2024-11-05 16:01:34.230218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:02.026 [2024-11-05 16:01:34.230234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:02.027 [2024-11-05 16:01:34.230436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:02.027 [2024-11-05 16:01:34.234377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:02.027 [2024-11-05 16:01:34.234394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:02.027 [2024-11-05 16:01:34.234530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:02.027 16:01:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:02.027 "name": "raid_bdev1", 00:32:02.027 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:02.027 "strip_size_kb": 64, 00:32:02.027 "state": "online", 00:32:02.027 
"raid_level": "raid5f", 00:32:02.027 "superblock": false, 00:32:02.027 "num_base_bdevs": 4, 00:32:02.027 "num_base_bdevs_discovered": 4, 00:32:02.027 "num_base_bdevs_operational": 4, 00:32:02.027 "base_bdevs_list": [ 00:32:02.027 { 00:32:02.027 "name": "BaseBdev1", 00:32:02.027 "uuid": "2adeba1e-0240-5462-8fd1-c8184d3f1faa", 00:32:02.027 "is_configured": true, 00:32:02.027 "data_offset": 0, 00:32:02.027 "data_size": 65536 00:32:02.027 }, 00:32:02.027 { 00:32:02.027 "name": "BaseBdev2", 00:32:02.027 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:02.027 "is_configured": true, 00:32:02.027 "data_offset": 0, 00:32:02.027 "data_size": 65536 00:32:02.027 }, 00:32:02.027 { 00:32:02.027 "name": "BaseBdev3", 00:32:02.027 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:02.027 "is_configured": true, 00:32:02.027 "data_offset": 0, 00:32:02.027 "data_size": 65536 00:32:02.027 }, 00:32:02.027 { 00:32:02.027 "name": "BaseBdev4", 00:32:02.027 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:02.027 "is_configured": true, 00:32:02.027 "data_offset": 0, 00:32:02.027 "data_size": 65536 00:32:02.027 } 00:32:02.027 ] 00:32:02.027 }' 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:02.027 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.285 [2024-11-05 16:01:34.551143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:02.285 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:32:02.286 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:02.544 [2024-11-05 16:01:34.791039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:02.544 /dev/nbd0 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:02.544 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:02.545 1+0 records in 00:32:02.545 1+0 records out 00:32:02.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237806 s, 17.2 MB/s 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:32:02.545 16:01:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:32:03.124 512+0 records in 00:32:03.124 512+0 records out 00:32:03.124 100663296 bytes (101 MB, 96 MiB) copied, 0.501454 s, 201 MB/s 00:32:03.124 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:03.124 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:03.124 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:03.124 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:03.124 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:03.124 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.124 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:03.386 [2024-11-05 16:01:35.546495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.386 [2024-11-05 16:01:35.583082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:03.386 "name": "raid_bdev1", 00:32:03.386 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:03.386 "strip_size_kb": 64, 00:32:03.386 "state": "online", 00:32:03.386 "raid_level": "raid5f", 00:32:03.386 "superblock": false, 00:32:03.386 "num_base_bdevs": 4, 00:32:03.386 "num_base_bdevs_discovered": 3, 00:32:03.386 "num_base_bdevs_operational": 3, 00:32:03.386 "base_bdevs_list": [ 00:32:03.386 { 00:32:03.386 "name": null, 00:32:03.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.386 "is_configured": false, 00:32:03.386 "data_offset": 0, 00:32:03.386 "data_size": 65536 00:32:03.386 }, 00:32:03.386 { 00:32:03.386 "name": "BaseBdev2", 00:32:03.386 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:03.386 "is_configured": true, 00:32:03.386 "data_offset": 0, 00:32:03.386 "data_size": 65536 00:32:03.386 }, 00:32:03.386 { 00:32:03.386 "name": "BaseBdev3", 00:32:03.386 "uuid": 
"d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:03.386 "is_configured": true, 00:32:03.386 "data_offset": 0, 00:32:03.386 "data_size": 65536 00:32:03.386 }, 00:32:03.386 { 00:32:03.386 "name": "BaseBdev4", 00:32:03.386 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:03.386 "is_configured": true, 00:32:03.386 "data_offset": 0, 00:32:03.386 "data_size": 65536 00:32:03.386 } 00:32:03.386 ] 00:32:03.386 }' 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:03.386 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.648 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:03.648 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.648 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.648 [2024-11-05 16:01:35.919140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:03.648 [2024-11-05 16:01:35.927369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:32:03.648 16:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.648 16:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:03.648 [2024-11-05 16:01:35.932695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:04.593 16:01:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.593 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:04.593 "name": "raid_bdev1", 00:32:04.593 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:04.593 "strip_size_kb": 64, 00:32:04.593 "state": "online", 00:32:04.593 "raid_level": "raid5f", 00:32:04.593 "superblock": false, 00:32:04.593 "num_base_bdevs": 4, 00:32:04.593 "num_base_bdevs_discovered": 4, 00:32:04.593 "num_base_bdevs_operational": 4, 00:32:04.593 "process": { 00:32:04.593 "type": "rebuild", 00:32:04.593 "target": "spare", 00:32:04.593 "progress": { 00:32:04.593 "blocks": 19200, 00:32:04.593 "percent": 9 00:32:04.593 } 00:32:04.593 }, 00:32:04.593 "base_bdevs_list": [ 00:32:04.593 { 00:32:04.593 "name": "spare", 00:32:04.593 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:04.593 "is_configured": true, 00:32:04.593 "data_offset": 0, 00:32:04.593 "data_size": 65536 00:32:04.593 }, 00:32:04.593 { 00:32:04.593 "name": "BaseBdev2", 00:32:04.593 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:04.593 "is_configured": true, 00:32:04.593 "data_offset": 0, 00:32:04.593 "data_size": 65536 00:32:04.593 }, 00:32:04.593 { 00:32:04.593 "name": "BaseBdev3", 00:32:04.593 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:04.593 "is_configured": true, 00:32:04.593 "data_offset": 0, 00:32:04.593 "data_size": 65536 00:32:04.593 }, 
00:32:04.593 { 00:32:04.594 "name": "BaseBdev4", 00:32:04.594 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:04.594 "is_configured": true, 00:32:04.594 "data_offset": 0, 00:32:04.594 "data_size": 65536 00:32:04.594 } 00:32:04.594 ] 00:32:04.594 }' 00:32:04.594 16:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:04.594 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:04.594 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:04.855 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.856 [2024-11-05 16:01:37.041343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:04.856 [2024-11-05 16:01:37.140431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:04.856 [2024-11-05 16:01:37.140494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:04.856 [2024-11-05 16:01:37.140509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:04.856 [2024-11-05 16:01:37.140517] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:04.856 "name": "raid_bdev1", 00:32:04.856 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:04.856 "strip_size_kb": 64, 00:32:04.856 "state": "online", 00:32:04.856 "raid_level": "raid5f", 00:32:04.856 "superblock": false, 00:32:04.856 "num_base_bdevs": 4, 00:32:04.856 "num_base_bdevs_discovered": 3, 00:32:04.856 "num_base_bdevs_operational": 3, 00:32:04.856 "base_bdevs_list": [ 00:32:04.856 { 00:32:04.856 "name": null, 00:32:04.856 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:04.856 "is_configured": false, 00:32:04.856 "data_offset": 0, 00:32:04.856 "data_size": 65536 00:32:04.856 }, 00:32:04.856 { 00:32:04.856 "name": "BaseBdev2", 00:32:04.856 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:04.856 "is_configured": true, 00:32:04.856 "data_offset": 0, 00:32:04.856 "data_size": 65536 00:32:04.856 }, 00:32:04.856 { 00:32:04.856 "name": "BaseBdev3", 00:32:04.856 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:04.856 "is_configured": true, 00:32:04.856 "data_offset": 0, 00:32:04.856 "data_size": 65536 00:32:04.856 }, 00:32:04.856 { 00:32:04.856 "name": "BaseBdev4", 00:32:04.856 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:04.856 "is_configured": true, 00:32:04.856 "data_offset": 0, 00:32:04.856 "data_size": 65536 00:32:04.856 } 00:32:04.856 ] 00:32:04.856 }' 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:04.856 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.118 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:05.118 "name": "raid_bdev1", 00:32:05.118 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:05.118 "strip_size_kb": 64, 00:32:05.118 "state": "online", 00:32:05.118 "raid_level": "raid5f", 00:32:05.118 "superblock": false, 00:32:05.118 "num_base_bdevs": 4, 00:32:05.118 "num_base_bdevs_discovered": 3, 00:32:05.118 "num_base_bdevs_operational": 3, 00:32:05.118 "base_bdevs_list": [ 00:32:05.118 { 00:32:05.118 "name": null, 00:32:05.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.118 "is_configured": false, 00:32:05.118 "data_offset": 0, 00:32:05.119 "data_size": 65536 00:32:05.119 }, 00:32:05.119 { 00:32:05.119 "name": "BaseBdev2", 00:32:05.119 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:05.119 "is_configured": true, 00:32:05.119 "data_offset": 0, 00:32:05.119 "data_size": 65536 00:32:05.119 }, 00:32:05.119 { 00:32:05.119 "name": "BaseBdev3", 00:32:05.119 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:05.119 "is_configured": true, 00:32:05.119 "data_offset": 0, 00:32:05.119 "data_size": 65536 00:32:05.119 }, 00:32:05.119 { 00:32:05.119 "name": "BaseBdev4", 00:32:05.119 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:05.119 "is_configured": true, 00:32:05.119 "data_offset": 0, 00:32:05.119 "data_size": 65536 00:32:05.119 } 00:32:05.119 ] 00:32:05.119 }' 00:32:05.119 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.380 [2024-11-05 16:01:37.577406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:05.380 [2024-11-05 16:01:37.585119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.380 16:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:05.380 [2024-11-05 16:01:37.590407] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:06.325 "name": "raid_bdev1", 00:32:06.325 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:06.325 "strip_size_kb": 64, 00:32:06.325 "state": "online", 00:32:06.325 "raid_level": "raid5f", 00:32:06.325 "superblock": false, 00:32:06.325 "num_base_bdevs": 4, 00:32:06.325 "num_base_bdevs_discovered": 4, 00:32:06.325 "num_base_bdevs_operational": 4, 00:32:06.325 "process": { 00:32:06.325 "type": "rebuild", 00:32:06.325 "target": "spare", 00:32:06.325 "progress": { 00:32:06.325 "blocks": 19200, 00:32:06.325 "percent": 9 00:32:06.325 } 00:32:06.325 }, 00:32:06.325 "base_bdevs_list": [ 00:32:06.325 { 00:32:06.325 "name": "spare", 00:32:06.325 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:06.325 "is_configured": true, 00:32:06.325 "data_offset": 0, 00:32:06.325 "data_size": 65536 00:32:06.325 }, 00:32:06.325 { 00:32:06.325 "name": "BaseBdev2", 00:32:06.325 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:06.325 "is_configured": true, 00:32:06.325 "data_offset": 0, 00:32:06.325 "data_size": 65536 00:32:06.325 }, 00:32:06.325 { 00:32:06.325 "name": "BaseBdev3", 00:32:06.325 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:06.325 "is_configured": true, 00:32:06.325 "data_offset": 0, 00:32:06.325 "data_size": 65536 00:32:06.325 }, 00:32:06.325 { 00:32:06.325 "name": "BaseBdev4", 00:32:06.325 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:06.325 "is_configured": true, 00:32:06.325 "data_offset": 0, 00:32:06.325 "data_size": 65536 00:32:06.325 } 00:32:06.325 ] 00:32:06.325 }' 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.325 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:06.325 "name": "raid_bdev1", 00:32:06.326 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:06.326 "strip_size_kb": 64, 
00:32:06.326 "state": "online", 00:32:06.326 "raid_level": "raid5f", 00:32:06.326 "superblock": false, 00:32:06.326 "num_base_bdevs": 4, 00:32:06.326 "num_base_bdevs_discovered": 4, 00:32:06.326 "num_base_bdevs_operational": 4, 00:32:06.326 "process": { 00:32:06.326 "type": "rebuild", 00:32:06.326 "target": "spare", 00:32:06.326 "progress": { 00:32:06.326 "blocks": 21120, 00:32:06.326 "percent": 10 00:32:06.326 } 00:32:06.326 }, 00:32:06.326 "base_bdevs_list": [ 00:32:06.326 { 00:32:06.326 "name": "spare", 00:32:06.326 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:06.326 "is_configured": true, 00:32:06.326 "data_offset": 0, 00:32:06.326 "data_size": 65536 00:32:06.326 }, 00:32:06.326 { 00:32:06.326 "name": "BaseBdev2", 00:32:06.326 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:06.326 "is_configured": true, 00:32:06.326 "data_offset": 0, 00:32:06.326 "data_size": 65536 00:32:06.326 }, 00:32:06.326 { 00:32:06.326 "name": "BaseBdev3", 00:32:06.326 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:06.326 "is_configured": true, 00:32:06.326 "data_offset": 0, 00:32:06.326 "data_size": 65536 00:32:06.326 }, 00:32:06.326 { 00:32:06.326 "name": "BaseBdev4", 00:32:06.326 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:06.326 "is_configured": true, 00:32:06.326 "data_offset": 0, 00:32:06.326 "data_size": 65536 00:32:06.326 } 00:32:06.326 ] 00:32:06.326 }' 00:32:06.326 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:06.632 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:06.632 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:06.632 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:06.632 16:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:07.566 "name": "raid_bdev1", 00:32:07.566 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:07.566 "strip_size_kb": 64, 00:32:07.566 "state": "online", 00:32:07.566 "raid_level": "raid5f", 00:32:07.566 "superblock": false, 00:32:07.566 "num_base_bdevs": 4, 00:32:07.566 "num_base_bdevs_discovered": 4, 00:32:07.566 "num_base_bdevs_operational": 4, 00:32:07.566 "process": { 00:32:07.566 "type": "rebuild", 00:32:07.566 "target": "spare", 00:32:07.566 "progress": { 00:32:07.566 "blocks": 40320, 00:32:07.566 "percent": 20 00:32:07.566 } 00:32:07.566 }, 00:32:07.566 "base_bdevs_list": [ 00:32:07.566 { 00:32:07.566 "name": "spare", 00:32:07.566 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:07.566 "is_configured": true, 
00:32:07.566 "data_offset": 0, 00:32:07.566 "data_size": 65536 00:32:07.566 }, 00:32:07.566 { 00:32:07.566 "name": "BaseBdev2", 00:32:07.566 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:07.566 "is_configured": true, 00:32:07.566 "data_offset": 0, 00:32:07.566 "data_size": 65536 00:32:07.566 }, 00:32:07.566 { 00:32:07.566 "name": "BaseBdev3", 00:32:07.566 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:07.566 "is_configured": true, 00:32:07.566 "data_offset": 0, 00:32:07.566 "data_size": 65536 00:32:07.566 }, 00:32:07.566 { 00:32:07.566 "name": "BaseBdev4", 00:32:07.566 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:07.566 "is_configured": true, 00:32:07.566 "data_offset": 0, 00:32:07.566 "data_size": 65536 00:32:07.566 } 00:32:07.566 ] 00:32:07.566 }' 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:07.566 16:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.500 16:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.758 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:08.758 "name": "raid_bdev1", 00:32:08.758 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:08.758 "strip_size_kb": 64, 00:32:08.758 "state": "online", 00:32:08.758 "raid_level": "raid5f", 00:32:08.758 "superblock": false, 00:32:08.758 "num_base_bdevs": 4, 00:32:08.758 "num_base_bdevs_discovered": 4, 00:32:08.758 "num_base_bdevs_operational": 4, 00:32:08.758 "process": { 00:32:08.758 "type": "rebuild", 00:32:08.758 "target": "spare", 00:32:08.758 "progress": { 00:32:08.758 "blocks": 61440, 00:32:08.758 "percent": 31 00:32:08.758 } 00:32:08.758 }, 00:32:08.758 "base_bdevs_list": [ 00:32:08.758 { 00:32:08.758 "name": "spare", 00:32:08.758 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:08.758 "is_configured": true, 00:32:08.758 "data_offset": 0, 00:32:08.758 "data_size": 65536 00:32:08.758 }, 00:32:08.758 { 00:32:08.758 "name": "BaseBdev2", 00:32:08.758 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:08.758 "is_configured": true, 00:32:08.758 "data_offset": 0, 00:32:08.758 "data_size": 65536 00:32:08.758 }, 00:32:08.758 { 00:32:08.758 "name": "BaseBdev3", 00:32:08.758 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:08.758 "is_configured": true, 00:32:08.758 "data_offset": 0, 00:32:08.758 "data_size": 65536 00:32:08.758 }, 00:32:08.758 { 00:32:08.758 "name": "BaseBdev4", 00:32:08.758 "uuid": 
"0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:08.758 "is_configured": true, 00:32:08.758 "data_offset": 0, 00:32:08.758 "data_size": 65536 00:32:08.758 } 00:32:08.758 ] 00:32:08.758 }' 00:32:08.758 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:08.758 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:08.758 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:08.758 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:08.758 16:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:09.700 16:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:09.700 16:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:09.700 16:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:09.700 16:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:09.700 16:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:09.700 16:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:09.700 "name": "raid_bdev1", 00:32:09.700 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:09.700 "strip_size_kb": 64, 00:32:09.700 "state": "online", 00:32:09.700 "raid_level": "raid5f", 00:32:09.700 "superblock": false, 00:32:09.700 "num_base_bdevs": 4, 00:32:09.700 "num_base_bdevs_discovered": 4, 00:32:09.700 "num_base_bdevs_operational": 4, 00:32:09.700 "process": { 00:32:09.700 "type": "rebuild", 00:32:09.700 "target": "spare", 00:32:09.700 "progress": { 00:32:09.700 "blocks": 82560, 00:32:09.700 "percent": 41 00:32:09.700 } 00:32:09.700 }, 00:32:09.700 "base_bdevs_list": [ 00:32:09.700 { 00:32:09.700 "name": "spare", 00:32:09.700 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:09.700 "is_configured": true, 00:32:09.700 "data_offset": 0, 00:32:09.700 "data_size": 65536 00:32:09.700 }, 00:32:09.700 { 00:32:09.700 "name": "BaseBdev2", 00:32:09.700 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:09.700 "is_configured": true, 00:32:09.700 "data_offset": 0, 00:32:09.700 "data_size": 65536 00:32:09.700 }, 00:32:09.700 { 00:32:09.700 "name": "BaseBdev3", 00:32:09.700 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:09.700 "is_configured": true, 00:32:09.700 "data_offset": 0, 00:32:09.700 "data_size": 65536 00:32:09.700 }, 00:32:09.700 { 00:32:09.700 "name": "BaseBdev4", 00:32:09.700 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:09.700 "is_configured": true, 00:32:09.700 "data_offset": 0, 00:32:09.700 "data_size": 65536 00:32:09.700 } 00:32:09.700 ] 00:32:09.700 }' 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:32:09.700 16:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:11.123 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:11.124 "name": "raid_bdev1", 00:32:11.124 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:11.124 "strip_size_kb": 64, 00:32:11.124 "state": "online", 00:32:11.124 "raid_level": "raid5f", 00:32:11.124 "superblock": false, 00:32:11.124 "num_base_bdevs": 4, 00:32:11.124 "num_base_bdevs_discovered": 4, 00:32:11.124 "num_base_bdevs_operational": 4, 00:32:11.124 "process": { 00:32:11.124 "type": "rebuild", 00:32:11.124 "target": "spare", 00:32:11.124 "progress": { 00:32:11.124 "blocks": 103680, 00:32:11.124 "percent": 52 00:32:11.124 } 00:32:11.124 }, 00:32:11.124 
"base_bdevs_list": [ 00:32:11.124 { 00:32:11.124 "name": "spare", 00:32:11.124 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:11.124 "is_configured": true, 00:32:11.124 "data_offset": 0, 00:32:11.124 "data_size": 65536 00:32:11.124 }, 00:32:11.124 { 00:32:11.124 "name": "BaseBdev2", 00:32:11.124 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:11.124 "is_configured": true, 00:32:11.124 "data_offset": 0, 00:32:11.124 "data_size": 65536 00:32:11.124 }, 00:32:11.124 { 00:32:11.124 "name": "BaseBdev3", 00:32:11.124 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:11.124 "is_configured": true, 00:32:11.124 "data_offset": 0, 00:32:11.124 "data_size": 65536 00:32:11.124 }, 00:32:11.124 { 00:32:11.124 "name": "BaseBdev4", 00:32:11.124 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:11.124 "is_configured": true, 00:32:11.124 "data_offset": 0, 00:32:11.124 "data_size": 65536 00:32:11.124 } 00:32:11.124 ] 00:32:11.124 }' 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:11.124 16:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:12.067 16:01:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:12.067 "name": "raid_bdev1", 00:32:12.067 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:12.067 "strip_size_kb": 64, 00:32:12.067 "state": "online", 00:32:12.067 "raid_level": "raid5f", 00:32:12.067 "superblock": false, 00:32:12.067 "num_base_bdevs": 4, 00:32:12.067 "num_base_bdevs_discovered": 4, 00:32:12.067 "num_base_bdevs_operational": 4, 00:32:12.067 "process": { 00:32:12.067 "type": "rebuild", 00:32:12.067 "target": "spare", 00:32:12.067 "progress": { 00:32:12.067 "blocks": 124800, 00:32:12.067 "percent": 63 00:32:12.067 } 00:32:12.067 }, 00:32:12.067 "base_bdevs_list": [ 00:32:12.067 { 00:32:12.067 "name": "spare", 00:32:12.067 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:12.067 "is_configured": true, 00:32:12.067 "data_offset": 0, 00:32:12.067 "data_size": 65536 00:32:12.067 }, 00:32:12.067 { 00:32:12.067 "name": "BaseBdev2", 00:32:12.067 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:12.067 "is_configured": true, 00:32:12.067 "data_offset": 0, 00:32:12.067 "data_size": 65536 00:32:12.067 }, 00:32:12.067 { 00:32:12.067 "name": "BaseBdev3", 00:32:12.067 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:12.067 
"is_configured": true, 00:32:12.067 "data_offset": 0, 00:32:12.067 "data_size": 65536 00:32:12.067 }, 00:32:12.067 { 00:32:12.067 "name": "BaseBdev4", 00:32:12.067 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:12.067 "is_configured": true, 00:32:12.067 "data_offset": 0, 00:32:12.067 "data_size": 65536 00:32:12.067 } 00:32:12.067 ] 00:32:12.067 }' 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:12.067 16:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:13.011 "name": "raid_bdev1", 00:32:13.011 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:13.011 "strip_size_kb": 64, 00:32:13.011 "state": "online", 00:32:13.011 "raid_level": "raid5f", 00:32:13.011 "superblock": false, 00:32:13.011 "num_base_bdevs": 4, 00:32:13.011 "num_base_bdevs_discovered": 4, 00:32:13.011 "num_base_bdevs_operational": 4, 00:32:13.011 "process": { 00:32:13.011 "type": "rebuild", 00:32:13.011 "target": "spare", 00:32:13.011 "progress": { 00:32:13.011 "blocks": 145920, 00:32:13.011 "percent": 74 00:32:13.011 } 00:32:13.011 }, 00:32:13.011 "base_bdevs_list": [ 00:32:13.011 { 00:32:13.011 "name": "spare", 00:32:13.011 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:13.011 "is_configured": true, 00:32:13.011 "data_offset": 0, 00:32:13.011 "data_size": 65536 00:32:13.011 }, 00:32:13.011 { 00:32:13.011 "name": "BaseBdev2", 00:32:13.011 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:13.011 "is_configured": true, 00:32:13.011 "data_offset": 0, 00:32:13.011 "data_size": 65536 00:32:13.011 }, 00:32:13.011 { 00:32:13.011 "name": "BaseBdev3", 00:32:13.011 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:13.011 "is_configured": true, 00:32:13.011 "data_offset": 0, 00:32:13.011 "data_size": 65536 00:32:13.011 }, 00:32:13.011 { 00:32:13.011 "name": "BaseBdev4", 00:32:13.011 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:13.011 "is_configured": true, 00:32:13.011 "data_offset": 0, 00:32:13.011 "data_size": 65536 00:32:13.011 } 00:32:13.011 ] 00:32:13.011 }' 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:13.011 16:01:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:13.011 16:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:14.397 "name": "raid_bdev1", 00:32:14.397 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:14.397 "strip_size_kb": 64, 00:32:14.397 "state": "online", 00:32:14.397 "raid_level": "raid5f", 00:32:14.397 "superblock": false, 00:32:14.397 "num_base_bdevs": 4, 00:32:14.397 "num_base_bdevs_discovered": 4, 00:32:14.397 "num_base_bdevs_operational": 4, 00:32:14.397 "process": { 00:32:14.397 
"type": "rebuild", 00:32:14.397 "target": "spare", 00:32:14.397 "progress": { 00:32:14.397 "blocks": 167040, 00:32:14.397 "percent": 84 00:32:14.397 } 00:32:14.397 }, 00:32:14.397 "base_bdevs_list": [ 00:32:14.397 { 00:32:14.397 "name": "spare", 00:32:14.397 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:14.397 "is_configured": true, 00:32:14.397 "data_offset": 0, 00:32:14.397 "data_size": 65536 00:32:14.397 }, 00:32:14.397 { 00:32:14.397 "name": "BaseBdev2", 00:32:14.397 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:14.397 "is_configured": true, 00:32:14.397 "data_offset": 0, 00:32:14.397 "data_size": 65536 00:32:14.397 }, 00:32:14.397 { 00:32:14.397 "name": "BaseBdev3", 00:32:14.397 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:14.397 "is_configured": true, 00:32:14.397 "data_offset": 0, 00:32:14.397 "data_size": 65536 00:32:14.397 }, 00:32:14.397 { 00:32:14.397 "name": "BaseBdev4", 00:32:14.397 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:14.397 "is_configured": true, 00:32:14.397 "data_offset": 0, 00:32:14.397 "data_size": 65536 00:32:14.397 } 00:32:14.397 ] 00:32:14.397 }' 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:14.397 16:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:15.341 "name": "raid_bdev1", 00:32:15.341 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:15.341 "strip_size_kb": 64, 00:32:15.341 "state": "online", 00:32:15.341 "raid_level": "raid5f", 00:32:15.341 "superblock": false, 00:32:15.341 "num_base_bdevs": 4, 00:32:15.341 "num_base_bdevs_discovered": 4, 00:32:15.341 "num_base_bdevs_operational": 4, 00:32:15.341 "process": { 00:32:15.341 "type": "rebuild", 00:32:15.341 "target": "spare", 00:32:15.341 "progress": { 00:32:15.341 "blocks": 188160, 00:32:15.341 "percent": 95 00:32:15.341 } 00:32:15.341 }, 00:32:15.341 "base_bdevs_list": [ 00:32:15.341 { 00:32:15.341 "name": "spare", 00:32:15.341 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:15.341 "is_configured": true, 00:32:15.341 "data_offset": 0, 00:32:15.341 "data_size": 65536 00:32:15.341 }, 00:32:15.341 { 00:32:15.341 "name": "BaseBdev2", 00:32:15.341 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:15.341 "is_configured": true, 00:32:15.341 "data_offset": 0, 00:32:15.341 
"data_size": 65536 00:32:15.341 }, 00:32:15.341 { 00:32:15.341 "name": "BaseBdev3", 00:32:15.341 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:15.341 "is_configured": true, 00:32:15.341 "data_offset": 0, 00:32:15.341 "data_size": 65536 00:32:15.341 }, 00:32:15.341 { 00:32:15.341 "name": "BaseBdev4", 00:32:15.341 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:15.341 "is_configured": true, 00:32:15.341 "data_offset": 0, 00:32:15.341 "data_size": 65536 00:32:15.341 } 00:32:15.341 ] 00:32:15.341 }' 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:15.341 16:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:15.602 [2024-11-05 16:01:47.954283] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:15.602 [2024-11-05 16:01:47.954350] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:15.602 [2024-11-05 16:01:47.954397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.177 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:16.177 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:16.177 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:16.177 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:16.177 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:32:16.177 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:16.437 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.437 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.437 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.437 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.437 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.437 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:16.437 "name": "raid_bdev1", 00:32:16.437 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:16.437 "strip_size_kb": 64, 00:32:16.437 "state": "online", 00:32:16.437 "raid_level": "raid5f", 00:32:16.438 "superblock": false, 00:32:16.438 "num_base_bdevs": 4, 00:32:16.438 "num_base_bdevs_discovered": 4, 00:32:16.438 "num_base_bdevs_operational": 4, 00:32:16.438 "base_bdevs_list": [ 00:32:16.438 { 00:32:16.438 "name": "spare", 00:32:16.438 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev2", 00:32:16.438 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev3", 00:32:16.438 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev4", 00:32:16.438 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 
00:32:16.438 "data_size": 65536 00:32:16.438 } 00:32:16.438 ] 00:32:16.438 }' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:16.438 "name": "raid_bdev1", 00:32:16.438 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:16.438 "strip_size_kb": 64, 00:32:16.438 "state": "online", 00:32:16.438 "raid_level": 
"raid5f", 00:32:16.438 "superblock": false, 00:32:16.438 "num_base_bdevs": 4, 00:32:16.438 "num_base_bdevs_discovered": 4, 00:32:16.438 "num_base_bdevs_operational": 4, 00:32:16.438 "base_bdevs_list": [ 00:32:16.438 { 00:32:16.438 "name": "spare", 00:32:16.438 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev2", 00:32:16.438 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev3", 00:32:16.438 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev4", 00:32:16.438 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 } 00:32:16.438 ] 00:32:16.438 }' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:16.438 "name": "raid_bdev1", 00:32:16.438 "uuid": "755d189a-9118-46b5-aad8-c78411aacfc1", 00:32:16.438 "strip_size_kb": 64, 00:32:16.438 "state": "online", 00:32:16.438 "raid_level": "raid5f", 00:32:16.438 "superblock": false, 00:32:16.438 "num_base_bdevs": 4, 00:32:16.438 "num_base_bdevs_discovered": 4, 00:32:16.438 "num_base_bdevs_operational": 4, 00:32:16.438 "base_bdevs_list": [ 00:32:16.438 { 00:32:16.438 "name": "spare", 00:32:16.438 "uuid": "481d253e-46fa-55e5-8fe6-41353ae773b7", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev2", 
00:32:16.438 "uuid": "c37df8f9-edaf-5ae8-8222-42dabb3e5a69", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev3", 00:32:16.438 "uuid": "d043f65d-468a-57ea-b767-e3576fc2e42f", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 }, 00:32:16.438 { 00:32:16.438 "name": "BaseBdev4", 00:32:16.438 "uuid": "0522ecc2-df08-547a-8b76-257b822ce4a9", 00:32:16.438 "is_configured": true, 00:32:16.438 "data_offset": 0, 00:32:16.438 "data_size": 65536 00:32:16.438 } 00:32:16.438 ] 00:32:16.438 }' 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:16.438 16:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.711 [2024-11-05 16:01:49.106700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:16.711 [2024-11-05 16:01:49.106725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:16.711 [2024-11-05 16:01:49.106788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:16.711 [2024-11-05 16:01:49.106885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:16.711 [2024-11-05 16:01:49.106895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.711 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.001 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.001 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:17.002 /dev/nbd0 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:17.002 1+0 records in 00:32:17.002 1+0 records out 00:32:17.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187272 s, 21.9 MB/s 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:17.002 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:17.262 /dev/nbd1 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:17.262 1+0 records in 00:32:17.262 1+0 records out 00:32:17.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170219 s, 24.1 MB/s 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:32:17.262 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:17.263 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:17.522 16:01:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81874 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81874 ']' 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81874 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 81874 00:32:17.782 killing process with pid 81874 00:32:17.782 Received shutdown signal, test time was about 60.000000 seconds 00:32:17.782 00:32:17.782 Latency(us) 00:32:17.782 [2024-11-05T16:01:50.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.782 [2024-11-05T16:01:50.197Z] =================================================================================================================== 00:32:17.782 [2024-11-05T16:01:50.197Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81874' 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81874 00:32:17.782 [2024-11-05 16:01:50.110383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:17.782 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81874 00:32:18.042 [2024-11-05 16:01:50.341799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:32:18.614 00:32:18.614 real 0m17.770s 00:32:18.614 user 0m20.702s 00:32:18.614 sys 0m1.786s 00:32:18.614 ************************************ 00:32:18.614 END TEST raid5f_rebuild_test 00:32:18.614 ************************************ 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.614 16:01:50 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:32:18.614 16:01:50 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:32:18.614 16:01:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:18.614 16:01:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:18.614 ************************************ 00:32:18.614 START TEST raid5f_rebuild_test_sb 00:32:18.614 ************************************ 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:32:18.614 16:01:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:18.614 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:18.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82373 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82373 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82373 ']' 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:18.615 16:01:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:18.615 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:18.615 Zero copy mechanism will not be used. 00:32:18.615 [2024-11-05 16:01:50.988488] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:32:18.615 [2024-11-05 16:01:50.988600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82373 ] 00:32:18.876 [2024-11-05 16:01:51.148431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.876 [2024-11-05 16:01:51.242418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.136 [2024-11-05 16:01:51.378093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:19.136 [2024-11-05 16:01:51.378146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.741 BaseBdev1_malloc 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.741 [2024-11-05 16:01:51.856598] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:19.741 [2024-11-05 16:01:51.856659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:19.741 [2024-11-05 16:01:51.856676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:19.741 [2024-11-05 16:01:51.856687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:19.741 [2024-11-05 16:01:51.858802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:19.741 [2024-11-05 16:01:51.858959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:19.741 BaseBdev1 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.741 BaseBdev2_malloc 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:19.741 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 [2024-11-05 16:01:51.892071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:19.742 [2024-11-05 16:01:51.892118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:32:19.742 [2024-11-05 16:01:51.892133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:19.742 [2024-11-05 16:01:51.892144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:19.742 [2024-11-05 16:01:51.894179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:19.742 [2024-11-05 16:01:51.894310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:19.742 BaseBdev2 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 BaseBdev3_malloc 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 [2024-11-05 16:01:51.936320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:19.742 [2024-11-05 16:01:51.936368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:19.742 [2024-11-05 16:01:51.936385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:19.742 [2024-11-05 
16:01:51.936396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:19.742 [2024-11-05 16:01:51.938441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:19.742 [2024-11-05 16:01:51.938476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:19.742 BaseBdev3 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 BaseBdev4_malloc 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 [2024-11-05 16:01:51.971692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:19.742 [2024-11-05 16:01:51.971832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:19.742 [2024-11-05 16:01:51.971865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:19.742 [2024-11-05 16:01:51.971875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:19.742 [2024-11-05 16:01:51.973905] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:32:19.742 [2024-11-05 16:01:51.973938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:19.742 BaseBdev4 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 spare_malloc 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 spare_delay 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 [2024-11-05 16:01:52.015095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:19.742 [2024-11-05 16:01:52.015145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:19.742 [2024-11-05 16:01:52.015161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:32:19.742 [2024-11-05 16:01:52.015171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:19.742 [2024-11-05 16:01:52.017242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:19.742 [2024-11-05 16:01:52.017275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:19.742 spare 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 [2024-11-05 16:01:52.023156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:19.742 [2024-11-05 16:01:52.024973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:19.742 [2024-11-05 16:01:52.025025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:19.742 [2024-11-05 16:01:52.025075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:19.742 [2024-11-05 16:01:52.025248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:19.742 [2024-11-05 16:01:52.025266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:19.742 [2024-11-05 16:01:52.025501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:19.742 [2024-11-05 16:01:52.030421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:19.742 [2024-11-05 16:01:52.030546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:32:19.742 [2024-11-05 16:01:52.030728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.742 16:01:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:19.742 "name": "raid_bdev1", 00:32:19.742 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:19.742 "strip_size_kb": 64, 00:32:19.742 "state": "online", 00:32:19.742 "raid_level": "raid5f", 00:32:19.742 "superblock": true, 00:32:19.742 "num_base_bdevs": 4, 00:32:19.742 "num_base_bdevs_discovered": 4, 00:32:19.742 "num_base_bdevs_operational": 4, 00:32:19.742 "base_bdevs_list": [ 00:32:19.742 { 00:32:19.742 "name": "BaseBdev1", 00:32:19.742 "uuid": "74f101f6-9ded-546a-9b95-69202a015937", 00:32:19.742 "is_configured": true, 00:32:19.742 "data_offset": 2048, 00:32:19.742 "data_size": 63488 00:32:19.742 }, 00:32:19.742 { 00:32:19.742 "name": "BaseBdev2", 00:32:19.742 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:19.742 "is_configured": true, 00:32:19.742 "data_offset": 2048, 00:32:19.742 "data_size": 63488 00:32:19.742 }, 00:32:19.742 { 00:32:19.742 "name": "BaseBdev3", 00:32:19.742 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:19.742 "is_configured": true, 00:32:19.742 "data_offset": 2048, 00:32:19.742 "data_size": 63488 00:32:19.742 }, 00:32:19.742 { 00:32:19.742 "name": "BaseBdev4", 00:32:19.742 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:19.742 "is_configured": true, 00:32:19.742 "data_offset": 2048, 00:32:19.742 "data_size": 63488 00:32:19.742 } 00:32:19.742 ] 00:32:19.742 }' 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:19.742 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:20.004 [2024-11-05 16:01:52.356210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:20.004 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:20.005 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:20.265 [2024-11-05 16:01:52.600073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:20.265 /dev/nbd0 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:20.265 1+0 records in 00:32:20.265 1+0 records out 00:32:20.265 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000224252 s, 18.3 MB/s 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:32:20.265 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:32:20.266 16:01:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:32:20.837 496+0 records in 00:32:20.837 496+0 records out 00:32:20.837 97517568 bytes (98 MB, 93 MiB) copied, 0.500968 s, 195 MB/s 00:32:20.837 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:20.837 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:20.837 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:20.837 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:20.837 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:32:20.837 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:20.837 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:21.098 [2024-11-05 16:01:53.351553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:21.098 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.099 [2024-11-05 16:01:53.384902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.099 "name": "raid_bdev1", 00:32:21.099 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:21.099 "strip_size_kb": 64, 00:32:21.099 "state": "online", 00:32:21.099 "raid_level": "raid5f", 00:32:21.099 "superblock": true, 00:32:21.099 "num_base_bdevs": 4, 00:32:21.099 "num_base_bdevs_discovered": 3, 00:32:21.099 "num_base_bdevs_operational": 3, 00:32:21.099 "base_bdevs_list": [ 00:32:21.099 { 00:32:21.099 "name": null, 
00:32:21.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.099 "is_configured": false, 00:32:21.099 "data_offset": 0, 00:32:21.099 "data_size": 63488 00:32:21.099 }, 00:32:21.099 { 00:32:21.099 "name": "BaseBdev2", 00:32:21.099 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:21.099 "is_configured": true, 00:32:21.099 "data_offset": 2048, 00:32:21.099 "data_size": 63488 00:32:21.099 }, 00:32:21.099 { 00:32:21.099 "name": "BaseBdev3", 00:32:21.099 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:21.099 "is_configured": true, 00:32:21.099 "data_offset": 2048, 00:32:21.099 "data_size": 63488 00:32:21.099 }, 00:32:21.099 { 00:32:21.099 "name": "BaseBdev4", 00:32:21.099 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:21.099 "is_configured": true, 00:32:21.099 "data_offset": 2048, 00:32:21.099 "data_size": 63488 00:32:21.099 } 00:32:21.099 ] 00:32:21.099 }' 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.099 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.360 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:21.360 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.360 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.360 [2024-11-05 16:01:53.688971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:21.360 [2024-11-05 16:01:53.699073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:32:21.360 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.360 16:01:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:21.360 [2024-11-05 16:01:53.706960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.300 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:22.559 "name": "raid_bdev1", 00:32:22.559 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:22.559 "strip_size_kb": 64, 00:32:22.559 "state": "online", 00:32:22.559 "raid_level": "raid5f", 00:32:22.559 "superblock": true, 00:32:22.559 "num_base_bdevs": 4, 00:32:22.559 "num_base_bdevs_discovered": 4, 00:32:22.559 "num_base_bdevs_operational": 4, 00:32:22.559 "process": { 00:32:22.559 "type": "rebuild", 00:32:22.559 "target": "spare", 00:32:22.559 "progress": { 00:32:22.559 "blocks": 17280, 00:32:22.559 "percent": 9 00:32:22.559 } 00:32:22.559 }, 00:32:22.559 "base_bdevs_list": [ 00:32:22.559 { 00:32:22.559 "name": "spare", 00:32:22.559 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:22.559 "is_configured": true, 
00:32:22.559 "data_offset": 2048, 00:32:22.559 "data_size": 63488 00:32:22.559 }, 00:32:22.559 { 00:32:22.559 "name": "BaseBdev2", 00:32:22.559 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:22.559 "is_configured": true, 00:32:22.559 "data_offset": 2048, 00:32:22.559 "data_size": 63488 00:32:22.559 }, 00:32:22.559 { 00:32:22.559 "name": "BaseBdev3", 00:32:22.559 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:22.559 "is_configured": true, 00:32:22.559 "data_offset": 2048, 00:32:22.559 "data_size": 63488 00:32:22.559 }, 00:32:22.559 { 00:32:22.559 "name": "BaseBdev4", 00:32:22.559 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:22.559 "is_configured": true, 00:32:22.559 "data_offset": 2048, 00:32:22.559 "data_size": 63488 00:32:22.559 } 00:32:22.559 ] 00:32:22.559 }' 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.559 [2024-11-05 16:01:54.799849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:22.559 [2024-11-05 16:01:54.814238] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:22.559 [2024-11-05 16:01:54.814297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:22.559 [2024-11-05 
16:01:54.814312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:22.559 [2024-11-05 16:01:54.814320] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.559 "name": "raid_bdev1", 00:32:22.559 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:22.559 "strip_size_kb": 64, 00:32:22.559 "state": "online", 00:32:22.559 "raid_level": "raid5f", 00:32:22.559 "superblock": true, 00:32:22.559 "num_base_bdevs": 4, 00:32:22.559 "num_base_bdevs_discovered": 3, 00:32:22.559 "num_base_bdevs_operational": 3, 00:32:22.559 "base_bdevs_list": [ 00:32:22.559 { 00:32:22.559 "name": null, 00:32:22.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.559 "is_configured": false, 00:32:22.559 "data_offset": 0, 00:32:22.559 "data_size": 63488 00:32:22.559 }, 00:32:22.559 { 00:32:22.559 "name": "BaseBdev2", 00:32:22.559 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:22.559 "is_configured": true, 00:32:22.559 "data_offset": 2048, 00:32:22.559 "data_size": 63488 00:32:22.559 }, 00:32:22.559 { 00:32:22.559 "name": "BaseBdev3", 00:32:22.559 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:22.559 "is_configured": true, 00:32:22.559 "data_offset": 2048, 00:32:22.559 "data_size": 63488 00:32:22.559 }, 00:32:22.559 { 00:32:22.559 "name": "BaseBdev4", 00:32:22.559 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:22.559 "is_configured": true, 00:32:22.559 "data_offset": 2048, 00:32:22.559 "data_size": 63488 00:32:22.559 } 00:32:22.559 ] 00:32:22.559 }' 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.559 16:01:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:22.817 "name": "raid_bdev1", 00:32:22.817 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:22.817 "strip_size_kb": 64, 00:32:22.817 "state": "online", 00:32:22.817 "raid_level": "raid5f", 00:32:22.817 "superblock": true, 00:32:22.817 "num_base_bdevs": 4, 00:32:22.817 "num_base_bdevs_discovered": 3, 00:32:22.817 "num_base_bdevs_operational": 3, 00:32:22.817 "base_bdevs_list": [ 00:32:22.817 { 00:32:22.817 "name": null, 00:32:22.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.817 "is_configured": false, 00:32:22.817 "data_offset": 0, 00:32:22.817 "data_size": 63488 00:32:22.817 }, 00:32:22.817 { 00:32:22.817 "name": "BaseBdev2", 00:32:22.817 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:22.817 "is_configured": true, 00:32:22.817 "data_offset": 2048, 00:32:22.817 "data_size": 63488 00:32:22.817 }, 00:32:22.817 { 00:32:22.817 "name": "BaseBdev3", 00:32:22.817 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:22.817 "is_configured": true, 00:32:22.817 "data_offset": 2048, 00:32:22.817 "data_size": 63488 00:32:22.817 }, 
00:32:22.817 { 00:32:22.817 "name": "BaseBdev4", 00:32:22.817 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:22.817 "is_configured": true, 00:32:22.817 "data_offset": 2048, 00:32:22.817 "data_size": 63488 00:32:22.817 } 00:32:22.817 ] 00:32:22.817 }' 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:22.817 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:23.077 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:23.077 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:23.077 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.077 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.077 [2024-11-05 16:01:55.242576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:23.077 [2024-11-05 16:01:55.250101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:32:23.077 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.077 16:01:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:23.077 [2024-11-05 16:01:55.255205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:24.017 "name": "raid_bdev1", 00:32:24.017 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:24.017 "strip_size_kb": 64, 00:32:24.017 "state": "online", 00:32:24.017 "raid_level": "raid5f", 00:32:24.017 "superblock": true, 00:32:24.017 "num_base_bdevs": 4, 00:32:24.017 "num_base_bdevs_discovered": 4, 00:32:24.017 "num_base_bdevs_operational": 4, 00:32:24.017 "process": { 00:32:24.017 "type": "rebuild", 00:32:24.017 "target": "spare", 00:32:24.017 "progress": { 00:32:24.017 "blocks": 17280, 00:32:24.017 "percent": 9 00:32:24.017 } 00:32:24.017 }, 00:32:24.017 "base_bdevs_list": [ 00:32:24.017 { 00:32:24.017 "name": "spare", 00:32:24.017 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 }, 00:32:24.017 { 00:32:24.017 "name": "BaseBdev2", 00:32:24.017 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 }, 00:32:24.017 { 00:32:24.017 "name": "BaseBdev3", 00:32:24.017 "uuid": 
"6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 }, 00:32:24.017 { 00:32:24.017 "name": "BaseBdev4", 00:32:24.017 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 } 00:32:24.017 ] 00:32:24.017 }' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:24.017 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=492 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:24.017 "name": "raid_bdev1", 00:32:24.017 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:24.017 "strip_size_kb": 64, 00:32:24.017 "state": "online", 00:32:24.017 "raid_level": "raid5f", 00:32:24.017 "superblock": true, 00:32:24.017 "num_base_bdevs": 4, 00:32:24.017 "num_base_bdevs_discovered": 4, 00:32:24.017 "num_base_bdevs_operational": 4, 00:32:24.017 "process": { 00:32:24.017 "type": "rebuild", 00:32:24.017 "target": "spare", 00:32:24.017 "progress": { 00:32:24.017 "blocks": 19200, 00:32:24.017 "percent": 10 00:32:24.017 } 00:32:24.017 }, 00:32:24.017 "base_bdevs_list": [ 00:32:24.017 { 00:32:24.017 "name": "spare", 00:32:24.017 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 }, 00:32:24.017 { 00:32:24.017 "name": "BaseBdev2", 00:32:24.017 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 }, 00:32:24.017 { 00:32:24.017 "name": "BaseBdev3", 00:32:24.017 "uuid": 
"6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 }, 00:32:24.017 { 00:32:24.017 "name": "BaseBdev4", 00:32:24.017 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:24.017 "is_configured": true, 00:32:24.017 "data_offset": 2048, 00:32:24.017 "data_size": 63488 00:32:24.017 } 00:32:24.017 ] 00:32:24.017 }' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:24.017 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:24.278 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:24.278 16:01:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:25.220 
16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:25.220 "name": "raid_bdev1", 00:32:25.220 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:25.220 "strip_size_kb": 64, 00:32:25.220 "state": "online", 00:32:25.220 "raid_level": "raid5f", 00:32:25.220 "superblock": true, 00:32:25.220 "num_base_bdevs": 4, 00:32:25.220 "num_base_bdevs_discovered": 4, 00:32:25.220 "num_base_bdevs_operational": 4, 00:32:25.220 "process": { 00:32:25.220 "type": "rebuild", 00:32:25.220 "target": "spare", 00:32:25.220 "progress": { 00:32:25.220 "blocks": 40320, 00:32:25.220 "percent": 21 00:32:25.220 } 00:32:25.220 }, 00:32:25.220 "base_bdevs_list": [ 00:32:25.220 { 00:32:25.220 "name": "spare", 00:32:25.220 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:25.220 "is_configured": true, 00:32:25.220 "data_offset": 2048, 00:32:25.220 "data_size": 63488 00:32:25.220 }, 00:32:25.220 { 00:32:25.220 "name": "BaseBdev2", 00:32:25.220 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:25.220 "is_configured": true, 00:32:25.220 "data_offset": 2048, 00:32:25.220 "data_size": 63488 00:32:25.220 }, 00:32:25.220 { 00:32:25.220 "name": "BaseBdev3", 00:32:25.220 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:25.220 "is_configured": true, 00:32:25.220 "data_offset": 2048, 00:32:25.220 "data_size": 63488 00:32:25.220 }, 00:32:25.220 { 00:32:25.220 "name": "BaseBdev4", 00:32:25.220 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:25.220 "is_configured": true, 00:32:25.220 "data_offset": 2048, 00:32:25.220 "data_size": 63488 00:32:25.220 } 00:32:25.220 ] 00:32:25.220 }' 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:25.220 16:01:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:25.220 16:01:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:26.163 "name": "raid_bdev1", 00:32:26.163 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:26.163 "strip_size_kb": 64, 00:32:26.163 "state": "online", 00:32:26.163 "raid_level": "raid5f", 00:32:26.163 "superblock": true, 
00:32:26.163 "num_base_bdevs": 4, 00:32:26.163 "num_base_bdevs_discovered": 4, 00:32:26.163 "num_base_bdevs_operational": 4, 00:32:26.163 "process": { 00:32:26.163 "type": "rebuild", 00:32:26.163 "target": "spare", 00:32:26.163 "progress": { 00:32:26.163 "blocks": 61440, 00:32:26.163 "percent": 32 00:32:26.163 } 00:32:26.163 }, 00:32:26.163 "base_bdevs_list": [ 00:32:26.163 { 00:32:26.163 "name": "spare", 00:32:26.163 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:26.163 "is_configured": true, 00:32:26.163 "data_offset": 2048, 00:32:26.163 "data_size": 63488 00:32:26.163 }, 00:32:26.163 { 00:32:26.163 "name": "BaseBdev2", 00:32:26.163 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:26.163 "is_configured": true, 00:32:26.163 "data_offset": 2048, 00:32:26.163 "data_size": 63488 00:32:26.163 }, 00:32:26.163 { 00:32:26.163 "name": "BaseBdev3", 00:32:26.163 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:26.163 "is_configured": true, 00:32:26.163 "data_offset": 2048, 00:32:26.163 "data_size": 63488 00:32:26.163 }, 00:32:26.163 { 00:32:26.163 "name": "BaseBdev4", 00:32:26.163 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:26.163 "is_configured": true, 00:32:26.163 "data_offset": 2048, 00:32:26.163 "data_size": 63488 00:32:26.163 } 00:32:26.163 ] 00:32:26.163 }' 00:32:26.163 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:26.424 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:26.424 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:26.424 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:26.424 16:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:27.367 16:01:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:27.367 "name": "raid_bdev1", 00:32:27.367 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:27.367 "strip_size_kb": 64, 00:32:27.367 "state": "online", 00:32:27.367 "raid_level": "raid5f", 00:32:27.367 "superblock": true, 00:32:27.367 "num_base_bdevs": 4, 00:32:27.367 "num_base_bdevs_discovered": 4, 00:32:27.367 "num_base_bdevs_operational": 4, 00:32:27.367 "process": { 00:32:27.367 "type": "rebuild", 00:32:27.367 "target": "spare", 00:32:27.367 "progress": { 00:32:27.367 "blocks": 82560, 00:32:27.367 "percent": 43 00:32:27.367 } 00:32:27.367 }, 00:32:27.367 "base_bdevs_list": [ 00:32:27.367 { 00:32:27.367 "name": "spare", 00:32:27.367 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:27.367 "is_configured": true, 00:32:27.367 "data_offset": 2048, 00:32:27.367 
"data_size": 63488 00:32:27.367 }, 00:32:27.367 { 00:32:27.367 "name": "BaseBdev2", 00:32:27.367 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:27.367 "is_configured": true, 00:32:27.367 "data_offset": 2048, 00:32:27.367 "data_size": 63488 00:32:27.367 }, 00:32:27.367 { 00:32:27.367 "name": "BaseBdev3", 00:32:27.367 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:27.367 "is_configured": true, 00:32:27.367 "data_offset": 2048, 00:32:27.367 "data_size": 63488 00:32:27.367 }, 00:32:27.367 { 00:32:27.367 "name": "BaseBdev4", 00:32:27.367 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:27.367 "is_configured": true, 00:32:27.367 "data_offset": 2048, 00:32:27.367 "data_size": 63488 00:32:27.367 } 00:32:27.367 ] 00:32:27.367 }' 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:27.367 16:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:28.752 "name": "raid_bdev1", 00:32:28.752 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:28.752 "strip_size_kb": 64, 00:32:28.752 "state": "online", 00:32:28.752 "raid_level": "raid5f", 00:32:28.752 "superblock": true, 00:32:28.752 "num_base_bdevs": 4, 00:32:28.752 "num_base_bdevs_discovered": 4, 00:32:28.752 "num_base_bdevs_operational": 4, 00:32:28.752 "process": { 00:32:28.752 "type": "rebuild", 00:32:28.752 "target": "spare", 00:32:28.752 "progress": { 00:32:28.752 "blocks": 103680, 00:32:28.752 "percent": 54 00:32:28.752 } 00:32:28.752 }, 00:32:28.752 "base_bdevs_list": [ 00:32:28.752 { 00:32:28.752 "name": "spare", 00:32:28.752 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:28.752 "is_configured": true, 00:32:28.752 "data_offset": 2048, 00:32:28.752 "data_size": 63488 00:32:28.752 }, 00:32:28.752 { 00:32:28.752 "name": "BaseBdev2", 00:32:28.752 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:28.752 "is_configured": true, 00:32:28.752 "data_offset": 2048, 00:32:28.752 "data_size": 63488 00:32:28.752 }, 00:32:28.752 { 00:32:28.752 "name": "BaseBdev3", 00:32:28.752 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:28.752 "is_configured": true, 00:32:28.752 "data_offset": 2048, 00:32:28.752 "data_size": 63488 00:32:28.752 }, 00:32:28.752 { 00:32:28.752 "name": "BaseBdev4", 
00:32:28.752 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:28.752 "is_configured": true, 00:32:28.752 "data_offset": 2048, 00:32:28.752 "data_size": 63488 00:32:28.752 } 00:32:28.752 ] 00:32:28.752 }' 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:28.752 16:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:29.696 "name": "raid_bdev1", 00:32:29.696 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:29.696 "strip_size_kb": 64, 00:32:29.696 "state": "online", 00:32:29.696 "raid_level": "raid5f", 00:32:29.696 "superblock": true, 00:32:29.696 "num_base_bdevs": 4, 00:32:29.696 "num_base_bdevs_discovered": 4, 00:32:29.696 "num_base_bdevs_operational": 4, 00:32:29.696 "process": { 00:32:29.696 "type": "rebuild", 00:32:29.696 "target": "spare", 00:32:29.696 "progress": { 00:32:29.696 "blocks": 124800, 00:32:29.696 "percent": 65 00:32:29.696 } 00:32:29.696 }, 00:32:29.696 "base_bdevs_list": [ 00:32:29.696 { 00:32:29.696 "name": "spare", 00:32:29.696 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:29.696 "is_configured": true, 00:32:29.696 "data_offset": 2048, 00:32:29.696 "data_size": 63488 00:32:29.696 }, 00:32:29.696 { 00:32:29.696 "name": "BaseBdev2", 00:32:29.696 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:29.696 "is_configured": true, 00:32:29.696 "data_offset": 2048, 00:32:29.696 "data_size": 63488 00:32:29.696 }, 00:32:29.696 { 00:32:29.696 "name": "BaseBdev3", 00:32:29.696 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:29.696 "is_configured": true, 00:32:29.696 "data_offset": 2048, 00:32:29.696 "data_size": 63488 00:32:29.696 }, 00:32:29.696 { 00:32:29.696 "name": "BaseBdev4", 00:32:29.696 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:29.696 "is_configured": true, 00:32:29.696 "data_offset": 2048, 00:32:29.696 "data_size": 63488 00:32:29.696 } 00:32:29.696 ] 00:32:29.696 }' 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:29.696 16:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:30.657 "name": "raid_bdev1", 00:32:30.657 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:30.657 "strip_size_kb": 64, 00:32:30.657 "state": "online", 00:32:30.657 "raid_level": "raid5f", 00:32:30.657 "superblock": true, 00:32:30.657 "num_base_bdevs": 4, 00:32:30.657 "num_base_bdevs_discovered": 4, 00:32:30.657 "num_base_bdevs_operational": 4, 00:32:30.657 "process": { 00:32:30.657 "type": "rebuild", 00:32:30.657 "target": "spare", 
00:32:30.657 "progress": { 00:32:30.657 "blocks": 145920, 00:32:30.657 "percent": 76 00:32:30.657 } 00:32:30.657 }, 00:32:30.657 "base_bdevs_list": [ 00:32:30.657 { 00:32:30.657 "name": "spare", 00:32:30.657 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:30.657 "is_configured": true, 00:32:30.657 "data_offset": 2048, 00:32:30.657 "data_size": 63488 00:32:30.657 }, 00:32:30.657 { 00:32:30.657 "name": "BaseBdev2", 00:32:30.657 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:30.657 "is_configured": true, 00:32:30.657 "data_offset": 2048, 00:32:30.657 "data_size": 63488 00:32:30.657 }, 00:32:30.657 { 00:32:30.657 "name": "BaseBdev3", 00:32:30.657 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:30.657 "is_configured": true, 00:32:30.657 "data_offset": 2048, 00:32:30.657 "data_size": 63488 00:32:30.657 }, 00:32:30.657 { 00:32:30.657 "name": "BaseBdev4", 00:32:30.657 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:30.657 "is_configured": true, 00:32:30.657 "data_offset": 2048, 00:32:30.657 "data_size": 63488 00:32:30.657 } 00:32:30.657 ] 00:32:30.657 }' 00:32:30.657 16:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:30.657 16:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:30.657 16:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:30.657 16:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:30.657 16:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:32.039 "name": "raid_bdev1", 00:32:32.039 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:32.039 "strip_size_kb": 64, 00:32:32.039 "state": "online", 00:32:32.039 "raid_level": "raid5f", 00:32:32.039 "superblock": true, 00:32:32.039 "num_base_bdevs": 4, 00:32:32.039 "num_base_bdevs_discovered": 4, 00:32:32.039 "num_base_bdevs_operational": 4, 00:32:32.039 "process": { 00:32:32.039 "type": "rebuild", 00:32:32.039 "target": "spare", 00:32:32.039 "progress": { 00:32:32.039 "blocks": 167040, 00:32:32.039 "percent": 87 00:32:32.039 } 00:32:32.039 }, 00:32:32.039 "base_bdevs_list": [ 00:32:32.039 { 00:32:32.039 "name": "spare", 00:32:32.039 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:32.039 "is_configured": true, 00:32:32.039 "data_offset": 2048, 00:32:32.039 "data_size": 63488 00:32:32.039 }, 00:32:32.039 { 00:32:32.039 "name": "BaseBdev2", 00:32:32.039 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:32.039 "is_configured": true, 00:32:32.039 
"data_offset": 2048, 00:32:32.039 "data_size": 63488 00:32:32.039 }, 00:32:32.039 { 00:32:32.039 "name": "BaseBdev3", 00:32:32.039 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:32.039 "is_configured": true, 00:32:32.039 "data_offset": 2048, 00:32:32.039 "data_size": 63488 00:32:32.039 }, 00:32:32.039 { 00:32:32.039 "name": "BaseBdev4", 00:32:32.039 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:32.039 "is_configured": true, 00:32:32.039 "data_offset": 2048, 00:32:32.039 "data_size": 63488 00:32:32.039 } 00:32:32.039 ] 00:32:32.039 }' 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:32.039 16:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:32.973 "name": "raid_bdev1", 00:32:32.973 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:32.973 "strip_size_kb": 64, 00:32:32.973 "state": "online", 00:32:32.973 "raid_level": "raid5f", 00:32:32.973 "superblock": true, 00:32:32.973 "num_base_bdevs": 4, 00:32:32.973 "num_base_bdevs_discovered": 4, 00:32:32.973 "num_base_bdevs_operational": 4, 00:32:32.973 "process": { 00:32:32.973 "type": "rebuild", 00:32:32.973 "target": "spare", 00:32:32.973 "progress": { 00:32:32.973 "blocks": 188160, 00:32:32.973 "percent": 98 00:32:32.973 } 00:32:32.973 }, 00:32:32.973 "base_bdevs_list": [ 00:32:32.973 { 00:32:32.973 "name": "spare", 00:32:32.973 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:32.973 "is_configured": true, 00:32:32.973 "data_offset": 2048, 00:32:32.973 "data_size": 63488 00:32:32.973 }, 00:32:32.973 { 00:32:32.973 "name": "BaseBdev2", 00:32:32.973 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:32.973 "is_configured": true, 00:32:32.973 "data_offset": 2048, 00:32:32.973 "data_size": 63488 00:32:32.973 }, 00:32:32.973 { 00:32:32.973 "name": "BaseBdev3", 00:32:32.973 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:32.973 "is_configured": true, 00:32:32.973 "data_offset": 2048, 00:32:32.973 "data_size": 63488 00:32:32.973 }, 00:32:32.973 { 00:32:32.973 "name": "BaseBdev4", 00:32:32.973 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:32.973 "is_configured": true, 00:32:32.973 "data_offset": 2048, 00:32:32.973 "data_size": 63488 00:32:32.973 } 00:32:32.973 ] 00:32:32.973 }' 
00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:32.973 16:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:32.973 [2024-11-05 16:02:05.322367] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:32.973 [2024-11-05 16:02:05.322441] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:32.973 [2024-11-05 16:02:05.322579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.908 16:02:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:33.908 "name": "raid_bdev1", 00:32:33.908 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:33.908 "strip_size_kb": 64, 00:32:33.908 "state": "online", 00:32:33.908 "raid_level": "raid5f", 00:32:33.908 "superblock": true, 00:32:33.908 "num_base_bdevs": 4, 00:32:33.908 "num_base_bdevs_discovered": 4, 00:32:33.908 "num_base_bdevs_operational": 4, 00:32:33.908 "base_bdevs_list": [ 00:32:33.908 { 00:32:33.908 "name": "spare", 00:32:33.908 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:33.908 "is_configured": true, 00:32:33.908 "data_offset": 2048, 00:32:33.908 "data_size": 63488 00:32:33.908 }, 00:32:33.908 { 00:32:33.908 "name": "BaseBdev2", 00:32:33.908 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:33.908 "is_configured": true, 00:32:33.908 "data_offset": 2048, 00:32:33.908 "data_size": 63488 00:32:33.908 }, 00:32:33.908 { 00:32:33.908 "name": "BaseBdev3", 00:32:33.908 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:33.908 "is_configured": true, 00:32:33.908 "data_offset": 2048, 00:32:33.908 "data_size": 63488 00:32:33.908 }, 00:32:33.908 { 00:32:33.908 "name": "BaseBdev4", 00:32:33.908 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:33.908 "is_configured": true, 00:32:33.908 "data_offset": 2048, 00:32:33.908 "data_size": 63488 00:32:33.908 } 00:32:33.908 ] 00:32:33.908 }' 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:33.908 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:34.167 16:02:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:34.167 "name": "raid_bdev1", 00:32:34.167 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:34.167 "strip_size_kb": 64, 00:32:34.167 "state": "online", 00:32:34.167 "raid_level": "raid5f", 00:32:34.167 "superblock": true, 00:32:34.167 "num_base_bdevs": 4, 00:32:34.167 "num_base_bdevs_discovered": 4, 00:32:34.167 "num_base_bdevs_operational": 4, 00:32:34.167 "base_bdevs_list": [ 00:32:34.167 { 00:32:34.167 "name": "spare", 00:32:34.167 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:34.167 "is_configured": true, 00:32:34.167 "data_offset": 2048, 00:32:34.167 "data_size": 
63488 00:32:34.167 }, 00:32:34.167 { 00:32:34.167 "name": "BaseBdev2", 00:32:34.167 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:34.167 "is_configured": true, 00:32:34.167 "data_offset": 2048, 00:32:34.167 "data_size": 63488 00:32:34.167 }, 00:32:34.167 { 00:32:34.167 "name": "BaseBdev3", 00:32:34.167 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:34.167 "is_configured": true, 00:32:34.167 "data_offset": 2048, 00:32:34.167 "data_size": 63488 00:32:34.167 }, 00:32:34.167 { 00:32:34.167 "name": "BaseBdev4", 00:32:34.167 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:34.167 "is_configured": true, 00:32:34.167 "data_offset": 2048, 00:32:34.167 "data_size": 63488 00:32:34.167 } 00:32:34.167 ] 00:32:34.167 }' 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.167 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.168 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.168 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.168 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.168 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.168 "name": "raid_bdev1", 00:32:34.168 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:34.168 "strip_size_kb": 64, 00:32:34.168 "state": "online", 00:32:34.168 "raid_level": "raid5f", 00:32:34.168 "superblock": true, 00:32:34.168 "num_base_bdevs": 4, 00:32:34.168 "num_base_bdevs_discovered": 4, 00:32:34.168 "num_base_bdevs_operational": 4, 00:32:34.168 "base_bdevs_list": [ 00:32:34.168 { 00:32:34.168 "name": "spare", 00:32:34.168 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:34.168 "is_configured": true, 00:32:34.168 "data_offset": 2048, 00:32:34.168 "data_size": 63488 00:32:34.168 }, 00:32:34.168 { 00:32:34.168 "name": "BaseBdev2", 00:32:34.168 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:34.168 "is_configured": true, 00:32:34.168 "data_offset": 2048, 00:32:34.168 "data_size": 63488 00:32:34.168 }, 00:32:34.168 { 00:32:34.168 "name": "BaseBdev3", 00:32:34.168 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:34.168 "is_configured": true, 00:32:34.168 "data_offset": 
2048, 00:32:34.168 "data_size": 63488 00:32:34.168 }, 00:32:34.168 { 00:32:34.168 "name": "BaseBdev4", 00:32:34.168 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:34.168 "is_configured": true, 00:32:34.168 "data_offset": 2048, 00:32:34.168 "data_size": 63488 00:32:34.168 } 00:32:34.168 ] 00:32:34.168 }' 00:32:34.168 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.168 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.426 [2024-11-05 16:02:06.756023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:34.426 [2024-11-05 16:02:06.756052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:34.426 [2024-11-05 16:02:06.756132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:34.426 [2024-11-05 16:02:06.756239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:34.426 [2024-11-05 16:02:06.756250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:34.426 16:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:34.685 /dev/nbd0 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 
00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:34.685 1+0 records in 00:32:34.685 1+0 records out 00:32:34.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243775 s, 16.8 MB/s 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:34.685 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:34.943 /dev/nbd1 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:34.943 1+0 records in 00:32:34.943 1+0 records out 00:32:34.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259261 s, 15.8 MB/s 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- 
# '[' 4096 '!=' 0 ']' 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:34.943 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:35.202 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.460 [2024-11-05 16:02:07.777470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:35.460 [2024-11-05 16:02:07.777522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.460 [2024-11-05 16:02:07.777542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:32:35.460 [2024-11-05 16:02:07.777551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.460 [2024-11-05 16:02:07.779818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.460 [2024-11-05 16:02:07.779865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:35.460 [2024-11-05 16:02:07.779947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:35.460 [2024-11-05 16:02:07.779993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:35.460 [2024-11-05 16:02:07.780122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:35.460 [2024-11-05 16:02:07.780205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:35.460 [2024-11-05 16:02:07.780274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:35.460 spare 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.460 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.718 [2024-11-05 16:02:07.880364] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:32:35.718 [2024-11-05 16:02:07.880538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:35.719 [2024-11-05 16:02:07.880863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:32:35.719 [2024-11-05 16:02:07.885472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:35.719 [2024-11-05 16:02:07.885556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:35.719 [2024-11-05 16:02:07.885829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:35.719 "name": "raid_bdev1", 00:32:35.719 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:35.719 "strip_size_kb": 64, 00:32:35.719 "state": "online", 00:32:35.719 "raid_level": "raid5f", 00:32:35.719 "superblock": true, 00:32:35.719 "num_base_bdevs": 4, 00:32:35.719 "num_base_bdevs_discovered": 4, 00:32:35.719 "num_base_bdevs_operational": 4, 00:32:35.719 "base_bdevs_list": [ 00:32:35.719 { 00:32:35.719 "name": "spare", 00:32:35.719 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:35.719 "is_configured": true, 00:32:35.719 "data_offset": 2048, 00:32:35.719 "data_size": 63488 00:32:35.719 }, 00:32:35.719 { 00:32:35.719 "name": "BaseBdev2", 00:32:35.719 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:35.719 "is_configured": true, 00:32:35.719 "data_offset": 2048, 00:32:35.719 "data_size": 63488 00:32:35.719 }, 00:32:35.719 { 00:32:35.719 "name": "BaseBdev3", 00:32:35.719 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:35.719 "is_configured": true, 00:32:35.719 "data_offset": 2048, 00:32:35.719 "data_size": 63488 00:32:35.719 }, 00:32:35.719 { 00:32:35.719 "name": "BaseBdev4", 00:32:35.719 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:35.719 "is_configured": true, 00:32:35.719 "data_offset": 2048, 00:32:35.719 "data_size": 63488 00:32:35.719 } 00:32:35.719 ] 00:32:35.719 }' 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:35.719 16:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.977 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:35.977 "name": "raid_bdev1", 00:32:35.977 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:35.977 "strip_size_kb": 64, 00:32:35.977 "state": "online", 00:32:35.977 "raid_level": "raid5f", 00:32:35.977 "superblock": true, 00:32:35.977 "num_base_bdevs": 4, 00:32:35.977 "num_base_bdevs_discovered": 4, 00:32:35.977 "num_base_bdevs_operational": 4, 00:32:35.977 "base_bdevs_list": [ 00:32:35.977 { 00:32:35.977 "name": "spare", 00:32:35.977 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:35.977 "is_configured": true, 00:32:35.977 "data_offset": 2048, 00:32:35.977 "data_size": 63488 00:32:35.977 }, 00:32:35.977 { 
00:32:35.977 "name": "BaseBdev2", 00:32:35.977 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:35.977 "is_configured": true, 00:32:35.977 "data_offset": 2048, 00:32:35.977 "data_size": 63488 00:32:35.977 }, 00:32:35.977 { 00:32:35.977 "name": "BaseBdev3", 00:32:35.977 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:35.977 "is_configured": true, 00:32:35.977 "data_offset": 2048, 00:32:35.977 "data_size": 63488 00:32:35.977 }, 00:32:35.977 { 00:32:35.977 "name": "BaseBdev4", 00:32:35.977 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:35.977 "is_configured": true, 00:32:35.977 "data_offset": 2048, 00:32:35.978 "data_size": 63488 00:32:35.978 } 00:32:35.978 ] 00:32:35.978 }' 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.978 [2024-11-05 16:02:08.331157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:35.978 "name": "raid_bdev1", 00:32:35.978 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:35.978 "strip_size_kb": 64, 00:32:35.978 "state": "online", 00:32:35.978 "raid_level": "raid5f", 00:32:35.978 "superblock": true, 00:32:35.978 "num_base_bdevs": 4, 00:32:35.978 "num_base_bdevs_discovered": 3, 00:32:35.978 "num_base_bdevs_operational": 3, 00:32:35.978 "base_bdevs_list": [ 00:32:35.978 { 00:32:35.978 "name": null, 00:32:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.978 "is_configured": false, 00:32:35.978 "data_offset": 0, 00:32:35.978 "data_size": 63488 00:32:35.978 }, 00:32:35.978 { 00:32:35.978 "name": "BaseBdev2", 00:32:35.978 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:35.978 "is_configured": true, 00:32:35.978 "data_offset": 2048, 00:32:35.978 "data_size": 63488 00:32:35.978 }, 00:32:35.978 { 00:32:35.978 "name": "BaseBdev3", 00:32:35.978 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:35.978 "is_configured": true, 00:32:35.978 "data_offset": 2048, 00:32:35.978 "data_size": 63488 00:32:35.978 }, 00:32:35.978 { 00:32:35.978 "name": "BaseBdev4", 00:32:35.978 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:35.978 "is_configured": true, 00:32:35.978 "data_offset": 2048, 00:32:35.978 "data_size": 63488 00:32:35.978 } 00:32:35.978 ] 00:32:35.978 }' 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:35.978 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.236 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:36.236 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.236 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.236 [2024-11-05 16:02:08.651257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:36.236 [2024-11-05 16:02:08.651411] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:36.236 [2024-11-05 16:02:08.651429] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:36.236 [2024-11-05 16:02:08.651461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:36.495 [2024-11-05 16:02:08.660786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:32:36.495 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.495 16:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:36.495 [2024-11-05 16:02:08.667302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:37.428 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:37.428 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:37.428 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:37.428 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:37.428 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:37.428 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.428 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:37.429 "name": "raid_bdev1", 00:32:37.429 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:37.429 "strip_size_kb": 64, 00:32:37.429 "state": "online", 00:32:37.429 "raid_level": "raid5f", 00:32:37.429 "superblock": true, 00:32:37.429 "num_base_bdevs": 4, 00:32:37.429 "num_base_bdevs_discovered": 4, 00:32:37.429 "num_base_bdevs_operational": 4, 00:32:37.429 "process": { 00:32:37.429 "type": "rebuild", 00:32:37.429 "target": "spare", 00:32:37.429 "progress": { 00:32:37.429 "blocks": 19200, 00:32:37.429 "percent": 10 00:32:37.429 } 00:32:37.429 }, 00:32:37.429 "base_bdevs_list": [ 00:32:37.429 { 00:32:37.429 "name": "spare", 00:32:37.429 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:37.429 "is_configured": true, 00:32:37.429 "data_offset": 2048, 00:32:37.429 "data_size": 63488 00:32:37.429 }, 00:32:37.429 { 00:32:37.429 "name": "BaseBdev2", 00:32:37.429 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:37.429 "is_configured": true, 00:32:37.429 "data_offset": 2048, 00:32:37.429 "data_size": 63488 00:32:37.429 }, 00:32:37.429 { 00:32:37.429 "name": "BaseBdev3", 00:32:37.429 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:37.429 "is_configured": true, 00:32:37.429 "data_offset": 2048, 00:32:37.429 "data_size": 63488 00:32:37.429 }, 00:32:37.429 { 00:32:37.429 "name": "BaseBdev4", 00:32:37.429 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:37.429 "is_configured": true, 00:32:37.429 "data_offset": 2048, 00:32:37.429 "data_size": 63488 00:32:37.429 } 00:32:37.429 ] 00:32:37.429 }' 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:37.429 16:02:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.429 [2024-11-05 16:02:09.756249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:37.429 [2024-11-05 16:02:09.774468] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:37.429 [2024-11-05 16:02:09.774608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:37.429 [2024-11-05 16:02:09.774662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:37.429 [2024-11-05 16:02:09.774684] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:37.429 16:02:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:37.429 "name": "raid_bdev1", 00:32:37.429 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:37.429 "strip_size_kb": 64, 00:32:37.429 "state": "online", 00:32:37.429 "raid_level": "raid5f", 00:32:37.429 "superblock": true, 00:32:37.429 "num_base_bdevs": 4, 00:32:37.429 "num_base_bdevs_discovered": 3, 00:32:37.429 "num_base_bdevs_operational": 3, 00:32:37.429 "base_bdevs_list": [ 00:32:37.429 { 00:32:37.429 "name": null, 00:32:37.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.429 "is_configured": false, 00:32:37.429 "data_offset": 0, 00:32:37.429 "data_size": 63488 00:32:37.429 }, 00:32:37.429 { 00:32:37.429 "name": "BaseBdev2", 00:32:37.429 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:37.429 "is_configured": true, 00:32:37.429 "data_offset": 2048, 00:32:37.429 "data_size": 
63488 00:32:37.429 }, 00:32:37.429 { 00:32:37.429 "name": "BaseBdev3", 00:32:37.429 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:37.429 "is_configured": true, 00:32:37.429 "data_offset": 2048, 00:32:37.429 "data_size": 63488 00:32:37.429 }, 00:32:37.429 { 00:32:37.429 "name": "BaseBdev4", 00:32:37.429 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:37.429 "is_configured": true, 00:32:37.429 "data_offset": 2048, 00:32:37.429 "data_size": 63488 00:32:37.429 } 00:32:37.429 ] 00:32:37.429 }' 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:37.429 16:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.995 16:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:37.995 16:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.995 16:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.995 [2024-11-05 16:02:10.107357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:37.995 [2024-11-05 16:02:10.107414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:37.995 [2024-11-05 16:02:10.107437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:32:37.995 [2024-11-05 16:02:10.107448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:37.995 [2024-11-05 16:02:10.107854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:37.995 [2024-11-05 16:02:10.107869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:37.995 [2024-11-05 16:02:10.107941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:37.995 [2024-11-05 16:02:10.107954] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:37.995 [2024-11-05 16:02:10.107962] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:37.995 [2024-11-05 16:02:10.107982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:37.995 [2024-11-05 16:02:10.115438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:32:37.995 spare 00:32:37.995 16:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.995 16:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:37.995 [2024-11-05 16:02:10.120480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.930 16:02:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:38.930 "name": "raid_bdev1", 00:32:38.930 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:38.930 "strip_size_kb": 64, 00:32:38.930 "state": "online", 00:32:38.930 "raid_level": "raid5f", 00:32:38.930 "superblock": true, 00:32:38.930 "num_base_bdevs": 4, 00:32:38.930 "num_base_bdevs_discovered": 4, 00:32:38.930 "num_base_bdevs_operational": 4, 00:32:38.930 "process": { 00:32:38.930 "type": "rebuild", 00:32:38.930 "target": "spare", 00:32:38.930 "progress": { 00:32:38.930 "blocks": 17280, 00:32:38.930 "percent": 9 00:32:38.930 } 00:32:38.930 }, 00:32:38.930 "base_bdevs_list": [ 00:32:38.930 { 00:32:38.930 "name": "spare", 00:32:38.930 "uuid": "fce067cb-d0e3-5144-83ed-ac67e0cd497b", 00:32:38.930 "is_configured": true, 00:32:38.930 "data_offset": 2048, 00:32:38.930 "data_size": 63488 00:32:38.930 }, 00:32:38.930 { 00:32:38.930 "name": "BaseBdev2", 00:32:38.930 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:38.930 "is_configured": true, 00:32:38.930 "data_offset": 2048, 00:32:38.930 "data_size": 63488 00:32:38.930 }, 00:32:38.930 { 00:32:38.930 "name": "BaseBdev3", 00:32:38.930 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:38.930 "is_configured": true, 00:32:38.930 "data_offset": 2048, 00:32:38.930 "data_size": 63488 00:32:38.930 }, 00:32:38.930 { 00:32:38.930 "name": "BaseBdev4", 00:32:38.930 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:38.930 "is_configured": true, 00:32:38.930 "data_offset": 2048, 00:32:38.930 "data_size": 63488 00:32:38.930 } 00:32:38.930 ] 00:32:38.930 }' 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:38.930 16:02:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.930 [2024-11-05 16:02:11.217132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:38.930 [2024-11-05 16:02:11.227322] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:38.930 [2024-11-05 16:02:11.227366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:38.930 [2024-11-05 16:02:11.227382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:38.930 [2024-11-05 16:02:11.227387] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.930 16:02:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.930 "name": "raid_bdev1", 00:32:38.930 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:38.930 "strip_size_kb": 64, 00:32:38.930 "state": "online", 00:32:38.930 "raid_level": "raid5f", 00:32:38.930 "superblock": true, 00:32:38.930 "num_base_bdevs": 4, 00:32:38.930 "num_base_bdevs_discovered": 3, 00:32:38.930 "num_base_bdevs_operational": 3, 00:32:38.930 "base_bdevs_list": [ 00:32:38.930 { 00:32:38.930 "name": null, 00:32:38.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.930 "is_configured": false, 00:32:38.930 "data_offset": 0, 00:32:38.930 "data_size": 63488 00:32:38.930 }, 00:32:38.930 { 00:32:38.930 "name": "BaseBdev2", 00:32:38.930 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:38.930 "is_configured": true, 00:32:38.930 "data_offset": 2048, 00:32:38.930 "data_size": 63488 00:32:38.930 }, 00:32:38.930 { 00:32:38.930 "name": "BaseBdev3", 00:32:38.930 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:38.930 "is_configured": true, 00:32:38.930 "data_offset": 2048, 00:32:38.930 
"data_size": 63488 00:32:38.930 }, 00:32:38.930 { 00:32:38.930 "name": "BaseBdev4", 00:32:38.930 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:38.930 "is_configured": true, 00:32:38.930 "data_offset": 2048, 00:32:38.930 "data_size": 63488 00:32:38.930 } 00:32:38.930 ] 00:32:38.930 }' 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.930 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:39.189 "name": "raid_bdev1", 00:32:39.189 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:39.189 "strip_size_kb": 64, 00:32:39.189 "state": "online", 00:32:39.189 "raid_level": "raid5f", 00:32:39.189 "superblock": true, 00:32:39.189 "num_base_bdevs": 4, 00:32:39.189 
"num_base_bdevs_discovered": 3, 00:32:39.189 "num_base_bdevs_operational": 3, 00:32:39.189 "base_bdevs_list": [ 00:32:39.189 { 00:32:39.189 "name": null, 00:32:39.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.189 "is_configured": false, 00:32:39.189 "data_offset": 0, 00:32:39.189 "data_size": 63488 00:32:39.189 }, 00:32:39.189 { 00:32:39.189 "name": "BaseBdev2", 00:32:39.189 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:39.189 "is_configured": true, 00:32:39.189 "data_offset": 2048, 00:32:39.189 "data_size": 63488 00:32:39.189 }, 00:32:39.189 { 00:32:39.189 "name": "BaseBdev3", 00:32:39.189 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:39.189 "is_configured": true, 00:32:39.189 "data_offset": 2048, 00:32:39.189 "data_size": 63488 00:32:39.189 }, 00:32:39.189 { 00:32:39.189 "name": "BaseBdev4", 00:32:39.189 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:39.189 "is_configured": true, 00:32:39.189 "data_offset": 2048, 00:32:39.189 "data_size": 63488 00:32:39.189 } 00:32:39.189 ] 00:32:39.189 }' 00:32:39.189 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.447 [2024-11-05 16:02:11.655345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:39.447 [2024-11-05 16:02:11.655474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:39.447 [2024-11-05 16:02:11.655497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:32:39.447 [2024-11-05 16:02:11.655505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:39.447 [2024-11-05 16:02:11.655878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:39.447 [2024-11-05 16:02:11.655895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:39.447 [2024-11-05 16:02:11.655957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:39.447 [2024-11-05 16:02:11.655968] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:39.447 [2024-11-05 16:02:11.655976] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:39.447 [2024-11-05 16:02:11.655983] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:39.447 BaseBdev1 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.447 16:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 
00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.381 "name": "raid_bdev1", 00:32:40.381 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:40.381 "strip_size_kb": 64, 00:32:40.381 "state": "online", 00:32:40.381 "raid_level": "raid5f", 00:32:40.381 "superblock": true, 00:32:40.381 "num_base_bdevs": 4, 00:32:40.381 "num_base_bdevs_discovered": 3, 00:32:40.381 
"num_base_bdevs_operational": 3, 00:32:40.381 "base_bdevs_list": [ 00:32:40.381 { 00:32:40.381 "name": null, 00:32:40.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.381 "is_configured": false, 00:32:40.381 "data_offset": 0, 00:32:40.381 "data_size": 63488 00:32:40.381 }, 00:32:40.381 { 00:32:40.381 "name": "BaseBdev2", 00:32:40.381 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:40.381 "is_configured": true, 00:32:40.381 "data_offset": 2048, 00:32:40.381 "data_size": 63488 00:32:40.381 }, 00:32:40.381 { 00:32:40.381 "name": "BaseBdev3", 00:32:40.381 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:40.381 "is_configured": true, 00:32:40.381 "data_offset": 2048, 00:32:40.381 "data_size": 63488 00:32:40.381 }, 00:32:40.381 { 00:32:40.381 "name": "BaseBdev4", 00:32:40.381 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:40.381 "is_configured": true, 00:32:40.381 "data_offset": 2048, 00:32:40.381 "data_size": 63488 00:32:40.381 } 00:32:40.381 ] 00:32:40.381 }' 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.381 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.639 16:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.639 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.639 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:40.639 "name": "raid_bdev1", 00:32:40.640 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:40.640 "strip_size_kb": 64, 00:32:40.640 "state": "online", 00:32:40.640 "raid_level": "raid5f", 00:32:40.640 "superblock": true, 00:32:40.640 "num_base_bdevs": 4, 00:32:40.640 "num_base_bdevs_discovered": 3, 00:32:40.640 "num_base_bdevs_operational": 3, 00:32:40.640 "base_bdevs_list": [ 00:32:40.640 { 00:32:40.640 "name": null, 00:32:40.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.640 "is_configured": false, 00:32:40.640 "data_offset": 0, 00:32:40.640 "data_size": 63488 00:32:40.640 }, 00:32:40.640 { 00:32:40.640 "name": "BaseBdev2", 00:32:40.640 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:40.640 "is_configured": true, 00:32:40.640 "data_offset": 2048, 00:32:40.640 "data_size": 63488 00:32:40.640 }, 00:32:40.640 { 00:32:40.640 "name": "BaseBdev3", 00:32:40.640 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:40.640 "is_configured": true, 00:32:40.640 "data_offset": 2048, 00:32:40.640 "data_size": 63488 00:32:40.640 }, 00:32:40.640 { 00:32:40.640 "name": "BaseBdev4", 00:32:40.640 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:40.640 "is_configured": true, 00:32:40.640 "data_offset": 2048, 00:32:40.640 "data_size": 63488 00:32:40.640 } 00:32:40.640 ] 00:32:40.640 }' 00:32:40.640 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:40.640 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:40.640 
16:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.898 [2024-11-05 16:02:13.079633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:40.898 [2024-11-05 16:02:13.079752] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:40.898 [2024-11-05 16:02:13.079764] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:40.898 request: 00:32:40.898 { 00:32:40.898 "base_bdev": "BaseBdev1", 00:32:40.898 "raid_bdev": 
"raid_bdev1", 00:32:40.898 "method": "bdev_raid_add_base_bdev", 00:32:40.898 "req_id": 1 00:32:40.898 } 00:32:40.898 Got JSON-RPC error response 00:32:40.898 response: 00:32:40.898 { 00:32:40.898 "code": -22, 00:32:40.898 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:40.898 } 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:40.898 16:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:41.834 16:02:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.834 "name": "raid_bdev1", 00:32:41.834 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:41.834 "strip_size_kb": 64, 00:32:41.834 "state": "online", 00:32:41.834 "raid_level": "raid5f", 00:32:41.834 "superblock": true, 00:32:41.834 "num_base_bdevs": 4, 00:32:41.834 "num_base_bdevs_discovered": 3, 00:32:41.834 "num_base_bdevs_operational": 3, 00:32:41.834 "base_bdevs_list": [ 00:32:41.834 { 00:32:41.834 "name": null, 00:32:41.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.834 "is_configured": false, 00:32:41.834 "data_offset": 0, 00:32:41.834 "data_size": 63488 00:32:41.834 }, 00:32:41.834 { 00:32:41.834 "name": "BaseBdev2", 00:32:41.834 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:41.834 "is_configured": true, 00:32:41.834 "data_offset": 2048, 00:32:41.834 "data_size": 63488 00:32:41.834 }, 00:32:41.834 { 00:32:41.834 "name": "BaseBdev3", 00:32:41.834 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:41.834 "is_configured": true, 00:32:41.834 "data_offset": 2048, 00:32:41.834 "data_size": 63488 00:32:41.834 }, 00:32:41.834 { 00:32:41.834 "name": "BaseBdev4", 00:32:41.834 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:41.834 "is_configured": true, 00:32:41.834 "data_offset": 2048, 00:32:41.834 
"data_size": 63488 00:32:41.834 } 00:32:41.834 ] 00:32:41.834 }' 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.834 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:42.092 "name": "raid_bdev1", 00:32:42.092 "uuid": "29e46e6c-113a-4993-98a6-5714b9ecdc61", 00:32:42.092 "strip_size_kb": 64, 00:32:42.092 "state": "online", 00:32:42.092 "raid_level": "raid5f", 00:32:42.092 "superblock": true, 00:32:42.092 "num_base_bdevs": 4, 00:32:42.092 "num_base_bdevs_discovered": 3, 00:32:42.092 "num_base_bdevs_operational": 3, 00:32:42.092 "base_bdevs_list": [ 00:32:42.092 { 00:32:42.092 "name": null, 00:32:42.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.092 
"is_configured": false, 00:32:42.092 "data_offset": 0, 00:32:42.092 "data_size": 63488 00:32:42.092 }, 00:32:42.092 { 00:32:42.092 "name": "BaseBdev2", 00:32:42.092 "uuid": "97dfd639-96e2-57c9-b27a-377ae8612b40", 00:32:42.092 "is_configured": true, 00:32:42.092 "data_offset": 2048, 00:32:42.092 "data_size": 63488 00:32:42.092 }, 00:32:42.092 { 00:32:42.092 "name": "BaseBdev3", 00:32:42.092 "uuid": "6b5def09-3ac4-5622-8780-d45dd6002773", 00:32:42.092 "is_configured": true, 00:32:42.092 "data_offset": 2048, 00:32:42.092 "data_size": 63488 00:32:42.092 }, 00:32:42.092 { 00:32:42.092 "name": "BaseBdev4", 00:32:42.092 "uuid": "a05ab79b-e9fa-58f2-8625-28a3e858755c", 00:32:42.092 "is_configured": true, 00:32:42.092 "data_offset": 2048, 00:32:42.092 "data_size": 63488 00:32:42.092 } 00:32:42.092 ] 00:32:42.092 }' 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:42.092 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82373 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82373 ']' 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82373 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82373 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82373' 00:32:42.351 killing process with pid 82373 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82373 00:32:42.351 Received shutdown signal, test time was about 60.000000 seconds 00:32:42.351 00:32:42.351 Latency(us) 00:32:42.351 [2024-11-05T16:02:14.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.351 [2024-11-05T16:02:14.766Z] =================================================================================================================== 00:32:42.351 [2024-11-05T16:02:14.766Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:42.351 [2024-11-05 16:02:14.530122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:42.351 16:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82373 00:32:42.351 [2024-11-05 16:02:14.530215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:42.351 [2024-11-05 16:02:14.530272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:42.351 [2024-11-05 16:02:14.530281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:42.351 [2024-11-05 16:02:14.766178] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:42.917 ************************************ 00:32:42.917 END TEST raid5f_rebuild_test_sb 00:32:42.917 ************************************ 00:32:42.917 16:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:32:42.917 00:32:42.917 real 0m24.393s 00:32:42.917 user 0m29.542s 00:32:42.917 sys 
0m2.131s 00:32:42.917 16:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:42.917 16:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.175 16:02:15 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:32:43.175 16:02:15 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:32:43.175 16:02:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:32:43.175 16:02:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:43.175 16:02:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:43.175 ************************************ 00:32:43.175 START TEST raid_state_function_test_sb_4k 00:32:43.175 ************************************ 00:32:43.175 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:43.176 Process raid pid: 83169 00:32:43.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=83169 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83169' 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 83169 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 83169 ']' 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:43.176 16:02:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:43.176 [2024-11-05 16:02:15.443414] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:32:43.176 [2024-11-05 16:02:15.443501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.434 [2024-11-05 16:02:15.592885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.434 [2024-11-05 16:02:15.692656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.434 [2024-11-05 16:02:15.829497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:43.434 [2024-11-05 16:02:15.829528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:43.999 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:43.999 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:32:43.999 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.000 [2024-11-05 16:02:16.298872] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:44.000 [2024-11-05 16:02:16.298917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:44.000 [2024-11-05 16:02:16.298927] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:44.000 [2024-11-05 16:02:16.298936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.000 "name": "Existed_Raid", 00:32:44.000 "uuid": 
"89e49a0f-d085-4b69-b24b-bd7dc648597c", 00:32:44.000 "strip_size_kb": 0, 00:32:44.000 "state": "configuring", 00:32:44.000 "raid_level": "raid1", 00:32:44.000 "superblock": true, 00:32:44.000 "num_base_bdevs": 2, 00:32:44.000 "num_base_bdevs_discovered": 0, 00:32:44.000 "num_base_bdevs_operational": 2, 00:32:44.000 "base_bdevs_list": [ 00:32:44.000 { 00:32:44.000 "name": "BaseBdev1", 00:32:44.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.000 "is_configured": false, 00:32:44.000 "data_offset": 0, 00:32:44.000 "data_size": 0 00:32:44.000 }, 00:32:44.000 { 00:32:44.000 "name": "BaseBdev2", 00:32:44.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.000 "is_configured": false, 00:32:44.000 "data_offset": 0, 00:32:44.000 "data_size": 0 00:32:44.000 } 00:32:44.000 ] 00:32:44.000 }' 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.000 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.257 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.258 [2024-11-05 16:02:16.630898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:44.258 [2024-11-05 16:02:16.630927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:44.258 16:02:16 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.258 [2024-11-05 16:02:16.638892] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:44.258 [2024-11-05 16:02:16.638923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:44.258 [2024-11-05 16:02:16.638931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:44.258 [2024-11-05 16:02:16.638942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.258 [2024-11-05 16:02:16.671200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:44.258 BaseBdev1 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.258 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.516 [ 00:32:44.516 { 00:32:44.516 "name": "BaseBdev1", 00:32:44.516 "aliases": [ 00:32:44.516 "e483cb6b-6b0b-4b4a-8aef-b6c7d7862c3b" 00:32:44.516 ], 00:32:44.516 "product_name": "Malloc disk", 00:32:44.516 "block_size": 4096, 00:32:44.516 "num_blocks": 8192, 00:32:44.516 "uuid": "e483cb6b-6b0b-4b4a-8aef-b6c7d7862c3b", 00:32:44.516 "assigned_rate_limits": { 00:32:44.516 "rw_ios_per_sec": 0, 00:32:44.516 "rw_mbytes_per_sec": 0, 00:32:44.516 "r_mbytes_per_sec": 0, 00:32:44.516 "w_mbytes_per_sec": 0 00:32:44.516 }, 00:32:44.516 "claimed": true, 00:32:44.516 "claim_type": "exclusive_write", 00:32:44.516 "zoned": false, 00:32:44.516 "supported_io_types": { 00:32:44.516 "read": true, 00:32:44.516 "write": true, 00:32:44.516 "unmap": true, 00:32:44.516 "flush": true, 00:32:44.516 "reset": true, 00:32:44.516 "nvme_admin": false, 00:32:44.516 "nvme_io": false, 00:32:44.516 "nvme_io_md": false, 00:32:44.516 "write_zeroes": true, 00:32:44.516 "zcopy": true, 00:32:44.516 
"get_zone_info": false, 00:32:44.516 "zone_management": false, 00:32:44.516 "zone_append": false, 00:32:44.516 "compare": false, 00:32:44.516 "compare_and_write": false, 00:32:44.516 "abort": true, 00:32:44.516 "seek_hole": false, 00:32:44.516 "seek_data": false, 00:32:44.516 "copy": true, 00:32:44.516 "nvme_iov_md": false 00:32:44.516 }, 00:32:44.516 "memory_domains": [ 00:32:44.516 { 00:32:44.516 "dma_device_id": "system", 00:32:44.516 "dma_device_type": 1 00:32:44.516 }, 00:32:44.516 { 00:32:44.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:44.516 "dma_device_type": 2 00:32:44.516 } 00:32:44.516 ], 00:32:44.516 "driver_specific": {} 00:32:44.516 } 00:32:44.516 ] 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.516 "name": "Existed_Raid", 00:32:44.516 "uuid": "f7b9780e-0f12-4a98-95d0-7f0933f00ec4", 00:32:44.516 "strip_size_kb": 0, 00:32:44.516 "state": "configuring", 00:32:44.516 "raid_level": "raid1", 00:32:44.516 "superblock": true, 00:32:44.516 "num_base_bdevs": 2, 00:32:44.516 "num_base_bdevs_discovered": 1, 00:32:44.516 "num_base_bdevs_operational": 2, 00:32:44.516 "base_bdevs_list": [ 00:32:44.516 { 00:32:44.516 "name": "BaseBdev1", 00:32:44.516 "uuid": "e483cb6b-6b0b-4b4a-8aef-b6c7d7862c3b", 00:32:44.516 "is_configured": true, 00:32:44.516 "data_offset": 256, 00:32:44.516 "data_size": 7936 00:32:44.516 }, 00:32:44.516 { 00:32:44.516 "name": "BaseBdev2", 00:32:44.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.516 "is_configured": false, 00:32:44.516 "data_offset": 0, 00:32:44.516 "data_size": 0 00:32:44.516 } 00:32:44.516 ] 00:32:44.516 }' 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.516 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.774 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:44.774 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.774 16:02:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.774 [2024-11-05 16:02:17.003289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:44.774 [2024-11-05 16:02:17.003331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.774 [2024-11-05 16:02:17.011321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:44.774 [2024-11-05 16:02:17.012822] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:44.774 [2024-11-05 16:02:17.012869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:44.774 16:02:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.774 "name": "Existed_Raid", 00:32:44.774 "uuid": "1e881cd9-ea45-4d56-b00f-c2048b33cc4a", 00:32:44.774 "strip_size_kb": 0, 00:32:44.774 "state": "configuring", 00:32:44.774 "raid_level": "raid1", 00:32:44.774 "superblock": true, 
00:32:44.774 "num_base_bdevs": 2, 00:32:44.774 "num_base_bdevs_discovered": 1, 00:32:44.774 "num_base_bdevs_operational": 2, 00:32:44.774 "base_bdevs_list": [ 00:32:44.774 { 00:32:44.774 "name": "BaseBdev1", 00:32:44.774 "uuid": "e483cb6b-6b0b-4b4a-8aef-b6c7d7862c3b", 00:32:44.774 "is_configured": true, 00:32:44.774 "data_offset": 256, 00:32:44.774 "data_size": 7936 00:32:44.774 }, 00:32:44.774 { 00:32:44.774 "name": "BaseBdev2", 00:32:44.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.774 "is_configured": false, 00:32:44.774 "data_offset": 0, 00:32:44.774 "data_size": 0 00:32:44.774 } 00:32:44.774 ] 00:32:44.774 }' 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.774 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.032 [2024-11-05 16:02:17.329499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:45.032 [2024-11-05 16:02:17.329670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:45.032 [2024-11-05 16:02:17.329681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:45.032 BaseBdev2 00:32:45.032 [2024-11-05 16:02:17.329942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:45.032 [2024-11-05 16:02:17.330062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:45.032 [2024-11-05 16:02:17.330071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:32:45.032 [2024-11-05 16:02:17.330179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.032 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.032 [ 00:32:45.032 { 00:32:45.032 "name": "BaseBdev2", 00:32:45.032 "aliases": [ 00:32:45.032 "ae5be306-a4f5-43e3-a257-29e58d9604dc" 00:32:45.032 ], 00:32:45.032 "product_name": "Malloc 
disk", 00:32:45.032 "block_size": 4096, 00:32:45.032 "num_blocks": 8192, 00:32:45.032 "uuid": "ae5be306-a4f5-43e3-a257-29e58d9604dc", 00:32:45.032 "assigned_rate_limits": { 00:32:45.032 "rw_ios_per_sec": 0, 00:32:45.032 "rw_mbytes_per_sec": 0, 00:32:45.032 "r_mbytes_per_sec": 0, 00:32:45.032 "w_mbytes_per_sec": 0 00:32:45.032 }, 00:32:45.032 "claimed": true, 00:32:45.032 "claim_type": "exclusive_write", 00:32:45.032 "zoned": false, 00:32:45.032 "supported_io_types": { 00:32:45.032 "read": true, 00:32:45.032 "write": true, 00:32:45.032 "unmap": true, 00:32:45.032 "flush": true, 00:32:45.032 "reset": true, 00:32:45.032 "nvme_admin": false, 00:32:45.032 "nvme_io": false, 00:32:45.032 "nvme_io_md": false, 00:32:45.032 "write_zeroes": true, 00:32:45.032 "zcopy": true, 00:32:45.033 "get_zone_info": false, 00:32:45.033 "zone_management": false, 00:32:45.033 "zone_append": false, 00:32:45.033 "compare": false, 00:32:45.033 "compare_and_write": false, 00:32:45.033 "abort": true, 00:32:45.033 "seek_hole": false, 00:32:45.033 "seek_data": false, 00:32:45.033 "copy": true, 00:32:45.033 "nvme_iov_md": false 00:32:45.033 }, 00:32:45.033 "memory_domains": [ 00:32:45.033 { 00:32:45.033 "dma_device_id": "system", 00:32:45.033 "dma_device_type": 1 00:32:45.033 }, 00:32:45.033 { 00:32:45.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.033 "dma_device_type": 2 00:32:45.033 } 00:32:45.033 ], 00:32:45.033 "driver_specific": {} 00:32:45.033 } 00:32:45.033 ] 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.033 "name": "Existed_Raid", 00:32:45.033 "uuid": "1e881cd9-ea45-4d56-b00f-c2048b33cc4a", 00:32:45.033 "strip_size_kb": 0, 00:32:45.033 "state": "online", 
00:32:45.033 "raid_level": "raid1", 00:32:45.033 "superblock": true, 00:32:45.033 "num_base_bdevs": 2, 00:32:45.033 "num_base_bdevs_discovered": 2, 00:32:45.033 "num_base_bdevs_operational": 2, 00:32:45.033 "base_bdevs_list": [ 00:32:45.033 { 00:32:45.033 "name": "BaseBdev1", 00:32:45.033 "uuid": "e483cb6b-6b0b-4b4a-8aef-b6c7d7862c3b", 00:32:45.033 "is_configured": true, 00:32:45.033 "data_offset": 256, 00:32:45.033 "data_size": 7936 00:32:45.033 }, 00:32:45.033 { 00:32:45.033 "name": "BaseBdev2", 00:32:45.033 "uuid": "ae5be306-a4f5-43e3-a257-29e58d9604dc", 00:32:45.033 "is_configured": true, 00:32:45.033 "data_offset": 256, 00:32:45.033 "data_size": 7936 00:32:45.033 } 00:32:45.033 ] 00:32:45.033 }' 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.033 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.290 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.291 [2024-11-05 16:02:17.689839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:45.291 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.548 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:45.548 "name": "Existed_Raid", 00:32:45.548 "aliases": [ 00:32:45.548 "1e881cd9-ea45-4d56-b00f-c2048b33cc4a" 00:32:45.548 ], 00:32:45.548 "product_name": "Raid Volume", 00:32:45.548 "block_size": 4096, 00:32:45.548 "num_blocks": 7936, 00:32:45.548 "uuid": "1e881cd9-ea45-4d56-b00f-c2048b33cc4a", 00:32:45.548 "assigned_rate_limits": { 00:32:45.548 "rw_ios_per_sec": 0, 00:32:45.548 "rw_mbytes_per_sec": 0, 00:32:45.548 "r_mbytes_per_sec": 0, 00:32:45.548 "w_mbytes_per_sec": 0 00:32:45.548 }, 00:32:45.548 "claimed": false, 00:32:45.548 "zoned": false, 00:32:45.548 "supported_io_types": { 00:32:45.548 "read": true, 00:32:45.548 "write": true, 00:32:45.548 "unmap": false, 00:32:45.548 "flush": false, 00:32:45.548 "reset": true, 00:32:45.548 "nvme_admin": false, 00:32:45.548 "nvme_io": false, 00:32:45.548 "nvme_io_md": false, 00:32:45.548 "write_zeroes": true, 00:32:45.548 "zcopy": false, 00:32:45.548 "get_zone_info": false, 00:32:45.548 "zone_management": false, 00:32:45.548 "zone_append": false, 00:32:45.548 "compare": false, 00:32:45.548 "compare_and_write": false, 00:32:45.548 "abort": false, 00:32:45.548 "seek_hole": false, 00:32:45.548 "seek_data": false, 00:32:45.548 "copy": false, 00:32:45.548 "nvme_iov_md": false 00:32:45.548 }, 00:32:45.548 "memory_domains": [ 00:32:45.548 { 00:32:45.548 "dma_device_id": "system", 00:32:45.548 "dma_device_type": 1 00:32:45.548 }, 00:32:45.548 { 00:32:45.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.548 "dma_device_type": 2 00:32:45.548 }, 00:32:45.548 { 00:32:45.548 
"dma_device_id": "system", 00:32:45.548 "dma_device_type": 1 00:32:45.548 }, 00:32:45.548 { 00:32:45.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.548 "dma_device_type": 2 00:32:45.548 } 00:32:45.548 ], 00:32:45.548 "driver_specific": { 00:32:45.548 "raid": { 00:32:45.548 "uuid": "1e881cd9-ea45-4d56-b00f-c2048b33cc4a", 00:32:45.548 "strip_size_kb": 0, 00:32:45.548 "state": "online", 00:32:45.548 "raid_level": "raid1", 00:32:45.548 "superblock": true, 00:32:45.548 "num_base_bdevs": 2, 00:32:45.548 "num_base_bdevs_discovered": 2, 00:32:45.548 "num_base_bdevs_operational": 2, 00:32:45.548 "base_bdevs_list": [ 00:32:45.548 { 00:32:45.548 "name": "BaseBdev1", 00:32:45.548 "uuid": "e483cb6b-6b0b-4b4a-8aef-b6c7d7862c3b", 00:32:45.548 "is_configured": true, 00:32:45.548 "data_offset": 256, 00:32:45.548 "data_size": 7936 00:32:45.548 }, 00:32:45.548 { 00:32:45.548 "name": "BaseBdev2", 00:32:45.548 "uuid": "ae5be306-a4f5-43e3-a257-29e58d9604dc", 00:32:45.548 "is_configured": true, 00:32:45.549 "data_offset": 256, 00:32:45.549 "data_size": 7936 00:32:45.549 } 00:32:45.549 ] 00:32:45.549 } 00:32:45.549 } 00:32:45.549 }' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:45.549 BaseBdev2' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.549 
16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.549 [2024-11-05 16:02:17.841656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.549 16:02:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.549 "name": "Existed_Raid", 00:32:45.549 "uuid": "1e881cd9-ea45-4d56-b00f-c2048b33cc4a", 00:32:45.549 "strip_size_kb": 0, 00:32:45.549 "state": "online", 00:32:45.549 "raid_level": "raid1", 00:32:45.549 "superblock": true, 00:32:45.549 "num_base_bdevs": 2, 00:32:45.549 "num_base_bdevs_discovered": 1, 00:32:45.549 "num_base_bdevs_operational": 1, 00:32:45.549 "base_bdevs_list": [ 00:32:45.549 { 00:32:45.549 "name": null, 00:32:45.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.549 "is_configured": false, 00:32:45.549 "data_offset": 0, 00:32:45.549 "data_size": 7936 00:32:45.549 }, 00:32:45.549 { 00:32:45.549 "name": "BaseBdev2", 00:32:45.549 "uuid": "ae5be306-a4f5-43e3-a257-29e58d9604dc", 00:32:45.549 "is_configured": true, 00:32:45.549 "data_offset": 256, 00:32:45.549 "data_size": 7936 00:32:45.549 } 00:32:45.549 ] 00:32:45.549 }' 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.549 16:02:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.807 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:45.807 16:02:18 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:45.807 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:45.807 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.807 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.807 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:45.807 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.064 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:46.064 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:46.064 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:46.064 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:46.065 [2024-11-05 16:02:18.228192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:46.065 [2024-11-05 16:02:18.228400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:46.065 [2024-11-05 16:02:18.275225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:46.065 [2024-11-05 16:02:18.275380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:46.065 [2024-11-05 16:02:18.275397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:46.065 16:02:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 83169 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 83169 ']' 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 83169 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83169 00:32:46.065 killing process with pid 83169 00:32:46.065 16:02:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83169' 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 83169 00:32:46.065 [2024-11-05 16:02:18.337955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:46.065 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 83169 00:32:46.065 [2024-11-05 16:02:18.346413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:46.631 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:32:46.631 00:32:46.631 real 0m3.523s 00:32:46.631 user 0m5.186s 00:32:46.631 sys 0m0.533s 00:32:46.631 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:46.631 ************************************ 00:32:46.631 END TEST raid_state_function_test_sb_4k 00:32:46.631 ************************************ 00:32:46.631 16:02:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:46.631 16:02:18 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:32:46.631 16:02:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:46.631 16:02:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:46.631 16:02:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:46.631 ************************************ 00:32:46.631 START TEST raid_superblock_test_4k 00:32:46.631 ************************************ 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # 
raid_superblock_test raid1 2 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=83408 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 83408 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 83408 ']' 00:32:46.631 16:02:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:46.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:46.631 16:02:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:46.631 [2024-11-05 16:02:19.020857] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:32:46.631 [2024-11-05 16:02:19.021121] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83408 ] 00:32:46.889 [2024-11-05 16:02:19.181369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.889 [2024-11-05 16:02:19.263952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.147 [2024-11-05 16:02:19.372321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:47.147 [2024-11-05 16:02:19.372348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:32:47.717 16:02:19 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.717 malloc1 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.717 [2024-11-05 16:02:19.893800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:47.717 [2024-11-05 16:02:19.893965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.717 
[2024-11-05 16:02:19.894003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:47.717 [2024-11-05 16:02:19.894056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.717 [2024-11-05 16:02:19.895816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.717 [2024-11-05 16:02:19.895929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:47.717 pt1 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.717 malloc2 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.717 [2024-11-05 16:02:19.925126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:47.717 [2024-11-05 16:02:19.925164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.717 [2024-11-05 16:02:19.925178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:47.717 [2024-11-05 16:02:19.925185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.717 [2024-11-05 16:02:19.926895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.717 [2024-11-05 16:02:19.926922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:47.717 pt2 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.717 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.717 [2024-11-05 16:02:19.933165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:47.717 [2024-11-05 16:02:19.934730] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:47.717 [2024-11-05 16:02:19.934938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:47.717 [2024-11-05 16:02:19.935001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:47.717 [2024-11-05 16:02:19.935236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:47.718 [2024-11-05 16:02:19.935407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:47.718 [2024-11-05 16:02:19.935464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:47.718 [2024-11-05 16:02:19.935648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:47.718 "name": "raid_bdev1", 00:32:47.718 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e", 00:32:47.718 "strip_size_kb": 0, 00:32:47.718 "state": "online", 00:32:47.718 "raid_level": "raid1", 00:32:47.718 "superblock": true, 00:32:47.718 "num_base_bdevs": 2, 00:32:47.718 "num_base_bdevs_discovered": 2, 00:32:47.718 "num_base_bdevs_operational": 2, 00:32:47.718 "base_bdevs_list": [ 00:32:47.718 { 00:32:47.718 "name": "pt1", 00:32:47.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:47.718 "is_configured": true, 00:32:47.718 "data_offset": 256, 00:32:47.718 "data_size": 7936 00:32:47.718 }, 00:32:47.718 { 00:32:47.718 "name": "pt2", 00:32:47.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.718 "is_configured": true, 00:32:47.718 "data_offset": 256, 00:32:47.718 "data_size": 7936 00:32:47.718 } 00:32:47.718 ] 00:32:47.718 }' 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:47.718 16:02:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:47.976 16:02:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:47.976 [2024-11-05 16:02:20.245463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:47.976 "name": "raid_bdev1", 00:32:47.976 "aliases": [ 00:32:47.976 "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e" 00:32:47.976 ], 00:32:47.976 "product_name": "Raid Volume", 00:32:47.976 "block_size": 4096, 00:32:47.976 "num_blocks": 7936, 00:32:47.976 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e", 00:32:47.976 "assigned_rate_limits": { 00:32:47.976 "rw_ios_per_sec": 0, 00:32:47.976 "rw_mbytes_per_sec": 0, 00:32:47.976 "r_mbytes_per_sec": 0, 00:32:47.976 "w_mbytes_per_sec": 0 00:32:47.976 }, 00:32:47.976 "claimed": false, 00:32:47.976 "zoned": false, 00:32:47.976 "supported_io_types": { 00:32:47.976 "read": true, 00:32:47.976 "write": true, 00:32:47.976 "unmap": false, 00:32:47.976 "flush": false, 
00:32:47.976 "reset": true, 00:32:47.976 "nvme_admin": false, 00:32:47.976 "nvme_io": false, 00:32:47.976 "nvme_io_md": false, 00:32:47.976 "write_zeroes": true, 00:32:47.976 "zcopy": false, 00:32:47.976 "get_zone_info": false, 00:32:47.976 "zone_management": false, 00:32:47.976 "zone_append": false, 00:32:47.976 "compare": false, 00:32:47.976 "compare_and_write": false, 00:32:47.976 "abort": false, 00:32:47.976 "seek_hole": false, 00:32:47.976 "seek_data": false, 00:32:47.976 "copy": false, 00:32:47.976 "nvme_iov_md": false 00:32:47.976 }, 00:32:47.976 "memory_domains": [ 00:32:47.976 { 00:32:47.976 "dma_device_id": "system", 00:32:47.976 "dma_device_type": 1 00:32:47.976 }, 00:32:47.976 { 00:32:47.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.976 "dma_device_type": 2 00:32:47.976 }, 00:32:47.976 { 00:32:47.976 "dma_device_id": "system", 00:32:47.976 "dma_device_type": 1 00:32:47.976 }, 00:32:47.976 { 00:32:47.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.976 "dma_device_type": 2 00:32:47.976 } 00:32:47.976 ], 00:32:47.976 "driver_specific": { 00:32:47.976 "raid": { 00:32:47.976 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e", 00:32:47.976 "strip_size_kb": 0, 00:32:47.976 "state": "online", 00:32:47.976 "raid_level": "raid1", 00:32:47.976 "superblock": true, 00:32:47.976 "num_base_bdevs": 2, 00:32:47.976 "num_base_bdevs_discovered": 2, 00:32:47.976 "num_base_bdevs_operational": 2, 00:32:47.976 "base_bdevs_list": [ 00:32:47.976 { 00:32:47.976 "name": "pt1", 00:32:47.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:47.976 "is_configured": true, 00:32:47.976 "data_offset": 256, 00:32:47.976 "data_size": 7936 00:32:47.976 }, 00:32:47.976 { 00:32:47.976 "name": "pt2", 00:32:47.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.976 "is_configured": true, 00:32:47.976 "data_offset": 256, 00:32:47.976 "data_size": 7936 00:32:47.976 } 00:32:47.976 ] 00:32:47.976 } 00:32:47.976 } 00:32:47.976 }' 00:32:47.976 16:02:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:47.976 pt2' 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:47.976 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:47.977 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.235 [2024-11-05 16:02:20.417465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e ']'
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.235 [2024-11-05 16:02:20.445214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:32:48.235 [2024-11-05 16:02:20.445307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:32:48.235 [2024-11-05 16:02:20.445411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:32:48.235 [2024-11-05 16:02:20.445473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:32:48.235 [2024-11-05 16:02:20.445593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.235 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.236 [2024-11-05 16:02:20.537259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:32:48.236 [2024-11-05 16:02:20.538859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:32:48.236 [2024-11-05 16:02:20.538986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:32:48.236 [2024-11-05 16:02:20.539030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:32:48.236 [2024-11-05 16:02:20.539042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:32:48.236 [2024-11-05 16:02:20.539050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:32:48.236 request:
00:32:48.236 {
00:32:48.236 "name": "raid_bdev1",
00:32:48.236 "raid_level": "raid1",
00:32:48.236 "base_bdevs": [
00:32:48.236 "malloc1",
00:32:48.236 "malloc2"
00:32:48.236 ],
00:32:48.236 "superblock": false,
00:32:48.236 "method": "bdev_raid_create",
00:32:48.236 "req_id": 1
00:32:48.236 }
00:32:48.236 Got JSON-RPC error response
00:32:48.236 response:
00:32:48.236 {
00:32:48.236 "code": -17,
00:32:48.236 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:32:48.236 }
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.236 [2024-11-05 16:02:20.581259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:32:48.236 [2024-11-05 16:02:20.581369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:48.236 [2024-11-05 16:02:20.581398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:32:48.236 [2024-11-05 16:02:20.581442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:48.236 [2024-11-05 16:02:20.583234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:48.236 [2024-11-05 16:02:20.583329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:32:48.236 [2024-11-05 16:02:20.583433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:32:48.236 [2024-11-05 16:02:20.583549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:32:48.236 pt1
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:48.236 "name": "raid_bdev1",
00:32:48.236 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e",
00:32:48.236 "strip_size_kb": 0,
00:32:48.236 "state": "configuring",
00:32:48.236 "raid_level": "raid1",
00:32:48.236 "superblock": true,
00:32:48.236 "num_base_bdevs": 2,
00:32:48.236 "num_base_bdevs_discovered": 1,
00:32:48.236 "num_base_bdevs_operational": 2,
00:32:48.236 "base_bdevs_list": [
00:32:48.236 {
00:32:48.236 "name": "pt1",
00:32:48.236 "uuid": "00000000-0000-0000-0000-000000000001",
00:32:48.236 "is_configured": true,
00:32:48.236 "data_offset": 256,
00:32:48.236 "data_size": 7936
00:32:48.236 },
00:32:48.236 {
00:32:48.236 "name": null,
00:32:48.236 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:48.236 "is_configured": false,
00:32:48.236 "data_offset": 256,
00:32:48.236 "data_size": 7936
00:32:48.236 }
00:32:48.236 ]
00:32:48.236 }'
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:48.236 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.495 [2024-11-05 16:02:20.901339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:32:48.495 [2024-11-05 16:02:20.901478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:48.495 [2024-11-05 16:02:20.901542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:32:48.495 [2024-11-05 16:02:20.901613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:48.495 [2024-11-05 16:02:20.901996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:48.495 [2024-11-05 16:02:20.902091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:32:48.495 [2024-11-05 16:02:20.902194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:32:48.495 [2024-11-05 16:02:20.902282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:32:48.495 [2024-11-05 16:02:20.902388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:32:48.495 [2024-11-05 16:02:20.902397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:32:48.495 [2024-11-05 16:02:20.902602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:32:48.495 [2024-11-05 16:02:20.902712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:32:48.495 [2024-11-05 16:02:20.902718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:32:48.495 [2024-11-05 16:02:20.902822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:32:48.495 pt2
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.495 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:48.754 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.754 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:48.754 "name": "raid_bdev1",
00:32:48.754 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e",
00:32:48.754 "strip_size_kb": 0,
00:32:48.754 "state": "online",
00:32:48.754 "raid_level": "raid1",
00:32:48.754 "superblock": true,
00:32:48.754 "num_base_bdevs": 2,
00:32:48.754 "num_base_bdevs_discovered": 2,
00:32:48.754 "num_base_bdevs_operational": 2,
00:32:48.754 "base_bdevs_list": [
00:32:48.754 {
00:32:48.754 "name": "pt1",
00:32:48.754 "uuid": "00000000-0000-0000-0000-000000000001",
00:32:48.754 "is_configured": true,
00:32:48.754 "data_offset": 256,
00:32:48.754 "data_size": 7936
00:32:48.754 },
00:32:48.754 {
00:32:48.754 "name": "pt2",
00:32:48.754 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:48.754 "is_configured": true,
00:32:48.754 "data_offset": 256,
00:32:48.754 "data_size": 7936
00:32:48.754 }
00:32:48.754 ]
00:32:48.754 }'
00:32:48.754 16:02:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:48.754 16:02:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:32:49.013 [2024-11-05 16:02:21.225608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.013 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:32:49.013 "name": "raid_bdev1",
00:32:49.013 "aliases": [
00:32:49.013 "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e"
00:32:49.013 ],
00:32:49.013 "product_name": "Raid Volume",
00:32:49.013 "block_size": 4096,
00:32:49.013 "num_blocks": 7936,
00:32:49.013 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e",
00:32:49.013 "assigned_rate_limits": {
00:32:49.013 "rw_ios_per_sec": 0,
00:32:49.013 "rw_mbytes_per_sec": 0,
00:32:49.013 "r_mbytes_per_sec": 0,
00:32:49.013 "w_mbytes_per_sec": 0
00:32:49.013 },
00:32:49.013 "claimed": false,
00:32:49.013 "zoned": false,
00:32:49.013 "supported_io_types": {
00:32:49.013 "read": true,
00:32:49.013 "write": true,
00:32:49.013 "unmap": false,
00:32:49.013 "flush": false,
00:32:49.013 "reset": true,
00:32:49.013 "nvme_admin": false,
00:32:49.013 "nvme_io": false,
00:32:49.013 "nvme_io_md": false,
00:32:49.013 "write_zeroes": true,
00:32:49.013 "zcopy": false,
00:32:49.013 "get_zone_info": false,
00:32:49.013 "zone_management": false,
00:32:49.013 "zone_append": false,
00:32:49.013 "compare": false,
00:32:49.013 "compare_and_write": false,
00:32:49.013 "abort": false,
00:32:49.013 "seek_hole": false,
00:32:49.013 "seek_data": false,
00:32:49.013 "copy": false,
00:32:49.013 "nvme_iov_md": false
00:32:49.013 },
00:32:49.013 "memory_domains": [
00:32:49.013 {
00:32:49.013 "dma_device_id": "system",
00:32:49.013 "dma_device_type": 1
00:32:49.013 },
00:32:49.013 {
00:32:49.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:49.013 "dma_device_type": 2
00:32:49.013 },
00:32:49.013 {
00:32:49.013 "dma_device_id": "system",
00:32:49.013 "dma_device_type": 1
00:32:49.013 },
00:32:49.013 {
00:32:49.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:49.013 "dma_device_type": 2
00:32:49.013 }
00:32:49.013 ],
00:32:49.013 "driver_specific": {
00:32:49.013 "raid": {
00:32:49.013 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e",
00:32:49.013 "strip_size_kb": 0,
00:32:49.013 "state": "online",
00:32:49.013 "raid_level": "raid1",
00:32:49.014 "superblock": true,
00:32:49.014 "num_base_bdevs": 2,
00:32:49.014 "num_base_bdevs_discovered": 2,
00:32:49.014 "num_base_bdevs_operational": 2,
00:32:49.014 "base_bdevs_list": [
00:32:49.014 {
00:32:49.014 "name": "pt1",
00:32:49.014 "uuid": "00000000-0000-0000-0000-000000000001",
00:32:49.014 "is_configured": true,
00:32:49.014 "data_offset": 256,
00:32:49.014 "data_size": 7936
00:32:49.014 },
00:32:49.014 {
00:32:49.014 "name": "pt2",
00:32:49.014 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:49.014 "is_configured": true,
00:32:49.014 "data_offset": 256,
00:32:49.014 "data_size": 7936
00:32:49.014 }
00:32:49.014 ]
00:32:49.014 }
00:32:49.014 }
00:32:49.014 }'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:32:49.014 pt2'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.014 [2024-11-05 16:02:21.377596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e '!=' 9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e ']'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.014 [2024-11-05 16:02:21.409427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.014 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.272 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:49.272 "name": "raid_bdev1",
00:32:49.272 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e",
00:32:49.272 "strip_size_kb": 0,
00:32:49.272 "state": "online",
00:32:49.272 "raid_level": "raid1",
00:32:49.272 "superblock": true,
00:32:49.272 "num_base_bdevs": 2,
00:32:49.272 "num_base_bdevs_discovered": 1,
00:32:49.272 "num_base_bdevs_operational": 1,
00:32:49.272 "base_bdevs_list": [
00:32:49.272 {
00:32:49.272 "name": null,
00:32:49.272 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:49.272 "is_configured": false,
00:32:49.272 "data_offset": 0,
00:32:49.272 "data_size": 7936
00:32:49.272 },
00:32:49.272 {
00:32:49.272 "name": "pt2",
00:32:49.272 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:49.272 "is_configured": true,
00:32:49.272 "data_offset": 256,
00:32:49.272 "data_size": 7936
00:32:49.272 }
00:32:49.272 ]
00:32:49.272 }'
00:32:49.272 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:49.272 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.531 [2024-11-05 16:02:21.717479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:32:49.531 [2024-11-05 16:02:21.717592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:32:49.531 [2024-11-05 16:02:21.717658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:32:49.531 [2024-11-05 16:02:21.717695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:32:49.531 [2024-11-05 16:02:21.717704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.531 [2024-11-05 16:02:21.765477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:32:49.531 [2024-11-05 16:02:21.765520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:49.531 [2024-11-05 16:02:21.765532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:32:49.531 [2024-11-05 16:02:21.765541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:49.531 [2024-11-05 16:02:21.767349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:49.531 [2024-11-05 16:02:21.767378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:32:49.531 [2024-11-05 16:02:21.767436] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:32:49.531 [2024-11-05 16:02:21.767473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:32:49.531 [2024-11-05 16:02:21.767546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:32:49.531 [2024-11-05 16:02:21.767560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:32:49.531 [2024-11-05 16:02:21.767746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:32:49.531 [2024-11-05 16:02:21.767864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:32:49.531 [2024-11-05 16:02:21.767874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:32:49.531 [2024-11-05 16:02:21.767981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:32:49.531 pt2
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:49.531 "name": "raid_bdev1",
00:32:49.531 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e",
00:32:49.531 "strip_size_kb": 0,
00:32:49.531 "state": "online",
00:32:49.531 "raid_level": "raid1",
00:32:49.531 "superblock": true,
00:32:49.531 "num_base_bdevs": 2,
00:32:49.531 "num_base_bdevs_discovered": 1,
00:32:49.531 "num_base_bdevs_operational": 1,
00:32:49.531 "base_bdevs_list": [
00:32:49.531 {
00:32:49.531 "name": null,
00:32:49.531 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:49.531 "is_configured": false,
00:32:49.531 "data_offset": 256,
00:32:49.531 "data_size": 7936
00:32:49.531 },
00:32:49.531 {
00:32:49.531 "name": "pt2",
00:32:49.531 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:49.531 "is_configured": true,
00:32:49.531 "data_offset": 256,
00:32:49.531 "data_size": 7936
00:32:49.531 }
00:32:49.531 ]
00:32:49.531 }'
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:49.531 16:02:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.789 [2024-11-05 16:02:22.077535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:32:49.789 [2024-11-05 16:02:22.077560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:32:49.789 [2024-11-05 16:02:22.077618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:32:49.789 [2024-11-05 16:02:22.077657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:32:49.789 [2024-11-05 16:02:22.077664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:32:49.789 [2024-11-05 16:02:22.117541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:32:49.789 [2024-11-05 16:02:22.117582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:49.789 [2024-11-05 16:02:22.117596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:32:49.789 [2024-11-05 16:02:22.117603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:49.789 [2024-11-05 16:02:22.119354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:49.789 [2024-11-05 16:02:22.119380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:32:49.789 [2024-11-05 16:02:22.119437] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:32:49.789 [2024-11-05 16:02:22.119471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:32:49.789 [2024-11-05 16:02:22.119564] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:32:49.789 [2024-11-05 16:02:22.119578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:32:49.789 [2024-11-05 16:02:22.119590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:32:49.789 [2024-11-05 16:02:22.119632]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:49.789 [2024-11-05 16:02:22.119688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:49.789 [2024-11-05 16:02:22.119694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:49.789 [2024-11-05 16:02:22.119893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:49.789 [2024-11-05 16:02:22.119991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:49.789 [2024-11-05 16:02:22.119999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:49.789 [2024-11-05 16:02:22.120101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.789 pt1 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:32:49.789 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.790 "name": "raid_bdev1", 00:32:49.790 "uuid": "9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e", 00:32:49.790 "strip_size_kb": 0, 00:32:49.790 "state": "online", 00:32:49.790 "raid_level": "raid1", 00:32:49.790 "superblock": true, 00:32:49.790 "num_base_bdevs": 2, 00:32:49.790 "num_base_bdevs_discovered": 1, 00:32:49.790 "num_base_bdevs_operational": 1, 00:32:49.790 "base_bdevs_list": [ 00:32:49.790 { 00:32:49.790 "name": null, 00:32:49.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.790 "is_configured": false, 00:32:49.790 "data_offset": 256, 00:32:49.790 "data_size": 7936 00:32:49.790 }, 00:32:49.790 { 00:32:49.790 "name": "pt2", 00:32:49.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:49.790 "is_configured": true, 00:32:49.790 "data_offset": 256, 00:32:49.790 "data_size": 7936 00:32:49.790 } 00:32:49.790 ] 00:32:49.790 }' 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.790 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.048 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:50.048 [2024-11-05 16:02:22.457795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e '!=' 9fe414b0-88fb-4b9f-a7a3-7b6f01ca6d5e ']' 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 83408 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 83408 ']' 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 83408 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83408 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:50.306 killing process with pid 83408 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83408' 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 83408 00:32:50.306 [2024-11-05 16:02:22.502231] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:50.306 [2024-11-05 16:02:22.502289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:50.306 [2024-11-05 16:02:22.502323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:50.306 [2024-11-05 16:02:22.502333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:50.306 16:02:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 83408 00:32:50.306 [2024-11-05 16:02:22.602219] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:50.875 16:02:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:32:50.875 00:32:50.875 real 0m4.198s 00:32:50.875 user 0m6.452s 00:32:50.875 sys 0m0.702s 00:32:50.875 16:02:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:50.875 16:02:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:50.875 ************************************ 00:32:50.875 END TEST raid_superblock_test_4k 00:32:50.875 ************************************ 00:32:50.875 16:02:23 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:32:50.875 16:02:23 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:32:50.875 16:02:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:32:50.875 16:02:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:50.875 16:02:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:50.875 ************************************ 00:32:50.875 START TEST raid_rebuild_test_sb_4k 00:32:50.875 ************************************ 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=83718 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 83718 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 83718 ']' 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:50.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:50.875 16:02:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:50.875 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:50.875 Zero copy mechanism will not be used. 00:32:50.875 [2024-11-05 16:02:23.280267] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:32:50.875 [2024-11-05 16:02:23.280433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83718 ] 00:32:51.133 [2024-11-05 16:02:23.441332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.133 [2024-11-05 16:02:23.542512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.432 [2024-11-05 16:02:23.677491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:51.432 [2024-11-05 16:02:23.677527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.000 BaseBdev1_malloc 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.000 [2024-11-05 16:02:24.154472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:52.000 [2024-11-05 16:02:24.154548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.000 [2024-11-05 16:02:24.154570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:52.000 [2024-11-05 16:02:24.154581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.000 [2024-11-05 16:02:24.156692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.000 [2024-11-05 16:02:24.156727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:52.000 BaseBdev1 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.000 BaseBdev2_malloc 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.000 [2024-11-05 16:02:24.190162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:52.000 [2024-11-05 16:02:24.190218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.000 [2024-11-05 16:02:24.190234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:52.000 [2024-11-05 16:02:24.190244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.000 [2024-11-05 16:02:24.192317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.000 [2024-11-05 16:02:24.192353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:52.000 BaseBdev2 00:32:52.000 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.001 spare_malloc 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.001 spare_delay 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.001 [2024-11-05 16:02:24.247971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:52.001 [2024-11-05 16:02:24.248025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.001 [2024-11-05 16:02:24.248041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:52.001 [2024-11-05 16:02:24.248052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.001 [2024-11-05 16:02:24.250130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.001 [2024-11-05 16:02:24.250162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:52.001 spare 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.001 [2024-11-05 16:02:24.256031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:52.001 [2024-11-05 16:02:24.257859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:52.001 [2024-11-05 16:02:24.258016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:52.001 [2024-11-05 16:02:24.258030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:52.001 [2024-11-05 16:02:24.258266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:52.001 [2024-11-05 16:02:24.258417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:52.001 [2024-11-05 16:02:24.258431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:52.001 [2024-11-05 16:02:24.258572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.001 "name": "raid_bdev1", 00:32:52.001 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:52.001 "strip_size_kb": 0, 00:32:52.001 "state": "online", 00:32:52.001 "raid_level": "raid1", 00:32:52.001 "superblock": true, 00:32:52.001 "num_base_bdevs": 2, 00:32:52.001 "num_base_bdevs_discovered": 2, 00:32:52.001 "num_base_bdevs_operational": 2, 00:32:52.001 "base_bdevs_list": [ 00:32:52.001 { 00:32:52.001 "name": "BaseBdev1", 00:32:52.001 "uuid": "bc16d190-4d1c-5884-8ab3-93c01e6253ec", 00:32:52.001 "is_configured": true, 00:32:52.001 "data_offset": 256, 00:32:52.001 "data_size": 7936 00:32:52.001 }, 00:32:52.001 { 00:32:52.001 "name": "BaseBdev2", 00:32:52.001 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:52.001 "is_configured": true, 00:32:52.001 "data_offset": 256, 00:32:52.001 "data_size": 7936 00:32:52.001 } 00:32:52.001 ] 00:32:52.001 }' 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:32:52.001 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:52.260 [2024-11-05 16:02:24.572389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock 
raid_bdev1 /dev/nbd0 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:52.260 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:52.519 [2024-11-05 16:02:24.816197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:52.519 /dev/nbd0 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:32:52.519 16:02:24 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:52.519 1+0 records in 00:32:52.519 1+0 records out 00:32:52.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636742 s, 6.4 MB/s 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:32:52.519 16:02:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:53.454 7936+0 records in 00:32:53.454 7936+0 records out 00:32:53.454 32505856 bytes (33 MB, 31 MiB) copied, 0.72075 s, 45.1 MB/s 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:53.454 [2024-11-05 16:02:25.808511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:53.454 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.455 16:02:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:53.455 [2024-11-05 16:02:25.824599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:53.455 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:53.713 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:53.713 "name": "raid_bdev1", 00:32:53.713 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:53.713 "strip_size_kb": 0, 00:32:53.713 "state": "online", 00:32:53.713 "raid_level": "raid1", 00:32:53.713 "superblock": true, 00:32:53.713 "num_base_bdevs": 2, 00:32:53.713 "num_base_bdevs_discovered": 1, 00:32:53.713 "num_base_bdevs_operational": 1, 00:32:53.713 "base_bdevs_list": [ 00:32:53.713 { 00:32:53.713 "name": null, 00:32:53.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:53.714 "is_configured": false, 00:32:53.714 "data_offset": 0, 00:32:53.714 "data_size": 7936 00:32:53.714 }, 00:32:53.714 { 00:32:53.714 "name": "BaseBdev2", 00:32:53.714 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:53.714 "is_configured": true, 00:32:53.714 "data_offset": 256, 00:32:53.714 "data_size": 7936 00:32:53.714 } 00:32:53.714 ] 00:32:53.714 }' 00:32:53.714 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:53.714 16:02:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:53.973 16:02:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:53.973 16:02:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.973 16:02:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:53.973 [2024-11-05 16:02:26.168690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:53.973 [2024-11-05 16:02:26.180177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:32:53.973 16:02:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.973 16:02:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:53.973 [2024-11-05 
16:02:26.182024] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:54.908 "name": "raid_bdev1", 00:32:54.908 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:54.908 "strip_size_kb": 0, 00:32:54.908 "state": "online", 00:32:54.908 "raid_level": "raid1", 00:32:54.908 "superblock": true, 00:32:54.908 "num_base_bdevs": 2, 00:32:54.908 "num_base_bdevs_discovered": 2, 00:32:54.908 "num_base_bdevs_operational": 2, 00:32:54.908 "process": { 00:32:54.908 "type": "rebuild", 00:32:54.908 "target": "spare", 00:32:54.908 "progress": { 00:32:54.908 "blocks": 2560, 00:32:54.908 "percent": 32 00:32:54.908 } 00:32:54.908 }, 00:32:54.908 "base_bdevs_list": [ 00:32:54.908 { 00:32:54.908 "name": "spare", 
00:32:54.908 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:32:54.908 "is_configured": true, 00:32:54.908 "data_offset": 256, 00:32:54.908 "data_size": 7936 00:32:54.908 }, 00:32:54.908 { 00:32:54.908 "name": "BaseBdev2", 00:32:54.908 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:54.908 "is_configured": true, 00:32:54.908 "data_offset": 256, 00:32:54.908 "data_size": 7936 00:32:54.908 } 00:32:54.908 ] 00:32:54.908 }' 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.908 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:54.908 [2024-11-05 16:02:27.283974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:54.908 [2024-11-05 16:02:27.286950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:54.908 [2024-11-05 16:02:27.286997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:54.908 [2024-11-05 16:02:27.287009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:54.908 [2024-11-05 16:02:27.287020] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.909 16:02:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.909 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.167 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:55.167 "name": "raid_bdev1", 00:32:55.167 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:55.167 "strip_size_kb": 0, 00:32:55.167 "state": "online", 00:32:55.167 "raid_level": "raid1", 00:32:55.167 
"superblock": true, 00:32:55.167 "num_base_bdevs": 2, 00:32:55.167 "num_base_bdevs_discovered": 1, 00:32:55.167 "num_base_bdevs_operational": 1, 00:32:55.167 "base_bdevs_list": [ 00:32:55.167 { 00:32:55.167 "name": null, 00:32:55.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.167 "is_configured": false, 00:32:55.167 "data_offset": 0, 00:32:55.167 "data_size": 7936 00:32:55.167 }, 00:32:55.167 { 00:32:55.167 "name": "BaseBdev2", 00:32:55.167 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:55.167 "is_configured": true, 00:32:55.167 "data_offset": 256, 00:32:55.167 "data_size": 7936 00:32:55.167 } 00:32:55.167 ] 00:32:55.167 }' 00:32:55.167 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:55.167 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:55.426 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:55.426 "name": "raid_bdev1", 00:32:55.426 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:55.426 "strip_size_kb": 0, 00:32:55.426 "state": "online", 00:32:55.426 "raid_level": "raid1", 00:32:55.426 "superblock": true, 00:32:55.426 "num_base_bdevs": 2, 00:32:55.426 "num_base_bdevs_discovered": 1, 00:32:55.427 "num_base_bdevs_operational": 1, 00:32:55.427 "base_bdevs_list": [ 00:32:55.427 { 00:32:55.427 "name": null, 00:32:55.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.427 "is_configured": false, 00:32:55.427 "data_offset": 0, 00:32:55.427 "data_size": 7936 00:32:55.427 }, 00:32:55.427 { 00:32:55.427 "name": "BaseBdev2", 00:32:55.427 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:55.427 "is_configured": true, 00:32:55.427 "data_offset": 256, 00:32:55.427 "data_size": 7936 00:32:55.427 } 00:32:55.427 ] 00:32:55.427 }' 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:55.427 [2024-11-05 16:02:27.717124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:55.427 [2024-11-05 16:02:27.725820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00018d330 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.427 16:02:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:55.427 [2024-11-05 16:02:27.727321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:56.364 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:56.364 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:56.364 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:56.364 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:56.364 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:56.364 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.364 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.365 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.365 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:56.365 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.365 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:56.365 "name": "raid_bdev1", 00:32:56.365 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:56.365 "strip_size_kb": 0, 00:32:56.365 "state": "online", 00:32:56.365 "raid_level": "raid1", 00:32:56.365 "superblock": true, 00:32:56.365 "num_base_bdevs": 2, 00:32:56.365 "num_base_bdevs_discovered": 2, 00:32:56.365 "num_base_bdevs_operational": 2, 00:32:56.365 "process": { 00:32:56.365 
"type": "rebuild", 00:32:56.365 "target": "spare", 00:32:56.365 "progress": { 00:32:56.365 "blocks": 2560, 00:32:56.365 "percent": 32 00:32:56.365 } 00:32:56.365 }, 00:32:56.365 "base_bdevs_list": [ 00:32:56.365 { 00:32:56.365 "name": "spare", 00:32:56.365 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:32:56.365 "is_configured": true, 00:32:56.365 "data_offset": 256, 00:32:56.365 "data_size": 7936 00:32:56.365 }, 00:32:56.365 { 00:32:56.365 "name": "BaseBdev2", 00:32:56.365 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:56.365 "is_configured": true, 00:32:56.365 "data_offset": 256, 00:32:56.365 "data_size": 7936 00:32:56.365 } 00:32:56.365 ] 00:32:56.365 }' 00:32:56.365 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:56.624 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.624 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:56.625 "name": "raid_bdev1", 00:32:56.625 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:56.625 "strip_size_kb": 0, 00:32:56.625 "state": "online", 00:32:56.625 "raid_level": "raid1", 00:32:56.625 "superblock": true, 00:32:56.625 "num_base_bdevs": 2, 00:32:56.625 "num_base_bdevs_discovered": 2, 00:32:56.625 "num_base_bdevs_operational": 2, 00:32:56.625 "process": { 00:32:56.625 "type": "rebuild", 00:32:56.625 "target": "spare", 00:32:56.625 "progress": { 00:32:56.625 "blocks": 2560, 00:32:56.625 "percent": 32 00:32:56.625 } 00:32:56.625 }, 00:32:56.625 "base_bdevs_list": [ 00:32:56.625 { 00:32:56.625 "name": "spare", 00:32:56.625 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:32:56.625 "is_configured": true, 
00:32:56.625 "data_offset": 256, 00:32:56.625 "data_size": 7936 00:32:56.625 }, 00:32:56.625 { 00:32:56.625 "name": "BaseBdev2", 00:32:56.625 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:56.625 "is_configured": true, 00:32:56.625 "data_offset": 256, 00:32:56.625 "data_size": 7936 00:32:56.625 } 00:32:56.625 ] 00:32:56.625 }' 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:56.625 16:02:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:57.561 "name": "raid_bdev1", 00:32:57.561 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:57.561 "strip_size_kb": 0, 00:32:57.561 "state": "online", 00:32:57.561 "raid_level": "raid1", 00:32:57.561 "superblock": true, 00:32:57.561 "num_base_bdevs": 2, 00:32:57.561 "num_base_bdevs_discovered": 2, 00:32:57.561 "num_base_bdevs_operational": 2, 00:32:57.561 "process": { 00:32:57.561 "type": "rebuild", 00:32:57.561 "target": "spare", 00:32:57.561 "progress": { 00:32:57.561 "blocks": 5376, 00:32:57.561 "percent": 67 00:32:57.561 } 00:32:57.561 }, 00:32:57.561 "base_bdevs_list": [ 00:32:57.561 { 00:32:57.561 "name": "spare", 00:32:57.561 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:32:57.561 "is_configured": true, 00:32:57.561 "data_offset": 256, 00:32:57.561 "data_size": 7936 00:32:57.561 }, 00:32:57.561 { 00:32:57.561 "name": "BaseBdev2", 00:32:57.561 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:57.561 "is_configured": true, 00:32:57.561 "data_offset": 256, 00:32:57.561 "data_size": 7936 00:32:57.561 } 00:32:57.561 ] 00:32:57.561 }' 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:57.561 16:02:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:57.820 16:02:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:57.820 16:02:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:58.756 [2024-11-05 16:02:30.839917] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:32:58.756 [2024-11-05 16:02:30.840117] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:58.756 [2024-11-05 16:02:30.840203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.756 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:58.757 "name": "raid_bdev1", 00:32:58.757 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:58.757 "strip_size_kb": 0, 00:32:58.757 "state": "online", 00:32:58.757 "raid_level": "raid1", 00:32:58.757 "superblock": true, 00:32:58.757 "num_base_bdevs": 2, 00:32:58.757 "num_base_bdevs_discovered": 2, 00:32:58.757 "num_base_bdevs_operational": 2, 
00:32:58.757 "base_bdevs_list": [ 00:32:58.757 { 00:32:58.757 "name": "spare", 00:32:58.757 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:32:58.757 "is_configured": true, 00:32:58.757 "data_offset": 256, 00:32:58.757 "data_size": 7936 00:32:58.757 }, 00:32:58.757 { 00:32:58.757 "name": "BaseBdev2", 00:32:58.757 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:58.757 "is_configured": true, 00:32:58.757 "data_offset": 256, 00:32:58.757 "data_size": 7936 00:32:58.757 } 00:32:58.757 ] 00:32:58.757 }' 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:58.757 "name": "raid_bdev1", 00:32:58.757 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:58.757 "strip_size_kb": 0, 00:32:58.757 "state": "online", 00:32:58.757 "raid_level": "raid1", 00:32:58.757 "superblock": true, 00:32:58.757 "num_base_bdevs": 2, 00:32:58.757 "num_base_bdevs_discovered": 2, 00:32:58.757 "num_base_bdevs_operational": 2, 00:32:58.757 "base_bdevs_list": [ 00:32:58.757 { 00:32:58.757 "name": "spare", 00:32:58.757 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:32:58.757 "is_configured": true, 00:32:58.757 "data_offset": 256, 00:32:58.757 "data_size": 7936 00:32:58.757 }, 00:32:58.757 { 00:32:58.757 "name": "BaseBdev2", 00:32:58.757 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:58.757 "is_configured": true, 00:32:58.757 "data_offset": 256, 00:32:58.757 "data_size": 7936 00:32:58.757 } 00:32:58.757 ] 00:32:58.757 }' 00:32:58.757 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.026 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:59.026 "name": "raid_bdev1", 00:32:59.026 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:32:59.026 "strip_size_kb": 0, 00:32:59.026 "state": "online", 00:32:59.026 "raid_level": "raid1", 00:32:59.026 "superblock": true, 00:32:59.026 "num_base_bdevs": 2, 00:32:59.026 "num_base_bdevs_discovered": 2, 00:32:59.026 "num_base_bdevs_operational": 2, 00:32:59.026 "base_bdevs_list": [ 00:32:59.026 { 00:32:59.026 "name": "spare", 00:32:59.026 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:32:59.026 "is_configured": true, 00:32:59.026 
"data_offset": 256, 00:32:59.026 "data_size": 7936 00:32:59.026 }, 00:32:59.026 { 00:32:59.026 "name": "BaseBdev2", 00:32:59.026 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:32:59.026 "is_configured": true, 00:32:59.026 "data_offset": 256, 00:32:59.026 "data_size": 7936 00:32:59.026 } 00:32:59.026 ] 00:32:59.026 }' 00:32:59.027 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:59.027 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:59.285 [2024-11-05 16:02:31.522042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:59.285 [2024-11-05 16:02:31.522071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:59.285 [2024-11-05 16:02:31.522129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:59.285 [2024-11-05 16:02:31.522182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:59.285 [2024-11-05 16:02:31.522190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:59.285 16:02:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:59.285 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:59.544 /dev/nbd0 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:59.544 1+0 records in 00:32:59.544 1+0 records out 00:32:59.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184288 s, 22.2 MB/s 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:59.544 16:02:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:59.544 16:02:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:59.802 /dev/nbd1 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:59.802 1+0 records in 00:32:59.802 1+0 records out 00:32:59.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229628 s, 17.8 MB/s 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:59.802 
16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:59.802 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:59.803 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:59.803 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:59.803 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:59.803 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:59.803 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:59.803 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:00.061 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.319 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.320 [2024-11-05 16:02:32.566282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:00.320 [2024-11-05 16:02:32.566335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.320 [2024-11-05 16:02:32.566355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:00.320 [2024-11-05 16:02:32.566364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:00.320 [2024-11-05 16:02:32.568214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.320 [2024-11-05 16:02:32.568246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:00.320 [2024-11-05 16:02:32.568326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:00.320 [2024-11-05 16:02:32.568366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:00.320 [2024-11-05 16:02:32.568480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:00.320 spare 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.320 [2024-11-05 16:02:32.668558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:00.320 [2024-11-05 16:02:32.668600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4096 00:33:00.320 [2024-11-05 16:02:32.668874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:33:00.320 [2024-11-05 16:02:32.669024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:00.320 [2024-11-05 16:02:32.669033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:00.320 [2024-11-05 16:02:32.669174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 
-- # xtrace_disable 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:00.320 "name": "raid_bdev1", 00:33:00.320 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:00.320 "strip_size_kb": 0, 00:33:00.320 "state": "online", 00:33:00.320 "raid_level": "raid1", 00:33:00.320 "superblock": true, 00:33:00.320 "num_base_bdevs": 2, 00:33:00.320 "num_base_bdevs_discovered": 2, 00:33:00.320 "num_base_bdevs_operational": 2, 00:33:00.320 "base_bdevs_list": [ 00:33:00.320 { 00:33:00.320 "name": "spare", 00:33:00.320 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:33:00.320 "is_configured": true, 00:33:00.320 "data_offset": 256, 00:33:00.320 "data_size": 7936 00:33:00.320 }, 00:33:00.320 { 00:33:00.320 "name": "BaseBdev2", 00:33:00.320 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:00.320 "is_configured": true, 00:33:00.320 "data_offset": 256, 00:33:00.320 "data_size": 7936 00:33:00.320 } 00:33:00.320 ] 00:33:00.320 }' 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:00.320 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 
00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.579 16:02:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:00.836 "name": "raid_bdev1", 00:33:00.836 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:00.836 "strip_size_kb": 0, 00:33:00.836 "state": "online", 00:33:00.836 "raid_level": "raid1", 00:33:00.836 "superblock": true, 00:33:00.836 "num_base_bdevs": 2, 00:33:00.836 "num_base_bdevs_discovered": 2, 00:33:00.836 "num_base_bdevs_operational": 2, 00:33:00.836 "base_bdevs_list": [ 00:33:00.836 { 00:33:00.836 "name": "spare", 00:33:00.836 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:33:00.836 "is_configured": true, 00:33:00.836 "data_offset": 256, 00:33:00.836 "data_size": 7936 00:33:00.836 }, 00:33:00.836 { 00:33:00.836 "name": "BaseBdev2", 00:33:00.836 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:00.836 "is_configured": true, 00:33:00.836 "data_offset": 256, 00:33:00.836 "data_size": 7936 00:33:00.836 } 00:33:00.836 ] 00:33:00.836 }' 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.836 [2024-11-05 16:02:33.106412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:33:00.836 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:00.837 "name": "raid_bdev1", 00:33:00.837 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:00.837 "strip_size_kb": 0, 00:33:00.837 "state": "online", 00:33:00.837 "raid_level": "raid1", 00:33:00.837 "superblock": true, 00:33:00.837 "num_base_bdevs": 2, 00:33:00.837 "num_base_bdevs_discovered": 1, 00:33:00.837 "num_base_bdevs_operational": 1, 00:33:00.837 "base_bdevs_list": [ 00:33:00.837 { 00:33:00.837 "name": null, 00:33:00.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.837 "is_configured": false, 00:33:00.837 "data_offset": 0, 00:33:00.837 "data_size": 7936 00:33:00.837 }, 00:33:00.837 { 00:33:00.837 "name": "BaseBdev2", 00:33:00.837 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:00.837 "is_configured": true, 00:33:00.837 "data_offset": 256, 00:33:00.837 "data_size": 7936 00:33:00.837 } 00:33:00.837 ] 00:33:00.837 }' 
00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:00.837 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:01.095 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:01.095 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.095 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:01.095 [2024-11-05 16:02:33.434502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:01.095 [2024-11-05 16:02:33.434655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:01.095 [2024-11-05 16:02:33.434669] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:01.095 [2024-11-05 16:02:33.434699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:01.095 [2024-11-05 16:02:33.443012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:33:01.095 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.095 16:02:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:01.095 [2024-11-05 16:02:33.444512] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:02.470 "name": "raid_bdev1", 00:33:02.470 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:02.470 "strip_size_kb": 0, 00:33:02.470 "state": "online", 00:33:02.470 "raid_level": "raid1", 00:33:02.470 "superblock": true, 00:33:02.470 "num_base_bdevs": 2, 00:33:02.470 "num_base_bdevs_discovered": 2, 00:33:02.470 "num_base_bdevs_operational": 2, 00:33:02.470 "process": { 00:33:02.470 "type": "rebuild", 00:33:02.470 "target": "spare", 00:33:02.470 "progress": { 00:33:02.470 "blocks": 2560, 00:33:02.470 "percent": 32 00:33:02.470 } 00:33:02.470 }, 00:33:02.470 "base_bdevs_list": [ 00:33:02.470 { 00:33:02.470 "name": "spare", 00:33:02.470 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:33:02.470 "is_configured": true, 00:33:02.470 "data_offset": 256, 00:33:02.470 "data_size": 7936 00:33:02.470 }, 00:33:02.470 { 00:33:02.470 "name": "BaseBdev2", 00:33:02.470 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:02.470 "is_configured": true, 00:33:02.470 "data_offset": 256, 00:33:02.470 "data_size": 7936 00:33:02.470 } 00:33:02.470 ] 00:33:02.470 }' 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:02.470 [2024-11-05 16:02:34.543035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:02.470 [2024-11-05 16:02:34.549275] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:02.470 [2024-11-05 16:02:34.549329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:02.470 [2024-11-05 16:02:34.549341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:02.470 [2024-11-05 16:02:34.549348] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.470 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:02.470 "name": "raid_bdev1", 00:33:02.471 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:02.471 "strip_size_kb": 0, 00:33:02.471 "state": "online", 00:33:02.471 "raid_level": "raid1", 00:33:02.471 "superblock": true, 00:33:02.471 "num_base_bdevs": 2, 00:33:02.471 "num_base_bdevs_discovered": 1, 00:33:02.471 "num_base_bdevs_operational": 1, 00:33:02.471 "base_bdevs_list": [ 00:33:02.471 { 00:33:02.471 "name": null, 00:33:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.471 "is_configured": false, 00:33:02.471 "data_offset": 0, 00:33:02.471 "data_size": 7936 00:33:02.471 }, 00:33:02.471 { 00:33:02.471 "name": "BaseBdev2", 00:33:02.471 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:02.471 "is_configured": true, 00:33:02.471 
"data_offset": 256, 00:33:02.471 "data_size": 7936 00:33:02.471 } 00:33:02.471 ] 00:33:02.471 }' 00:33:02.471 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:02.471 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:02.729 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:02.729 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.729 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:02.729 [2024-11-05 16:02:34.891215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:02.729 [2024-11-05 16:02:34.891267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:02.729 [2024-11-05 16:02:34.891284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:33:02.729 [2024-11-05 16:02:34.891293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:02.729 [2024-11-05 16:02:34.891641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:02.729 [2024-11-05 16:02:34.891664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:02.729 [2024-11-05 16:02:34.891732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:02.729 [2024-11-05 16:02:34.891743] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:02.729 [2024-11-05 16:02:34.891751] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:02.729 [2024-11-05 16:02:34.891770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:02.729 [2024-11-05 16:02:34.900012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:33:02.729 spare 00:33:02.729 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.729 16:02:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:02.729 [2024-11-05 16:02:34.901488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:03.703 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:03.703 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:03.704 "name": "raid_bdev1", 00:33:03.704 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:03.704 "strip_size_kb": 0, 00:33:03.704 
"state": "online", 00:33:03.704 "raid_level": "raid1", 00:33:03.704 "superblock": true, 00:33:03.704 "num_base_bdevs": 2, 00:33:03.704 "num_base_bdevs_discovered": 2, 00:33:03.704 "num_base_bdevs_operational": 2, 00:33:03.704 "process": { 00:33:03.704 "type": "rebuild", 00:33:03.704 "target": "spare", 00:33:03.704 "progress": { 00:33:03.704 "blocks": 2560, 00:33:03.704 "percent": 32 00:33:03.704 } 00:33:03.704 }, 00:33:03.704 "base_bdevs_list": [ 00:33:03.704 { 00:33:03.704 "name": "spare", 00:33:03.704 "uuid": "051fabee-b457-51c9-9419-c349000bade7", 00:33:03.704 "is_configured": true, 00:33:03.704 "data_offset": 256, 00:33:03.704 "data_size": 7936 00:33:03.704 }, 00:33:03.704 { 00:33:03.704 "name": "BaseBdev2", 00:33:03.704 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:03.704 "is_configured": true, 00:33:03.704 "data_offset": 256, 00:33:03.704 "data_size": 7936 00:33:03.704 } 00:33:03.704 ] 00:33:03.704 }' 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:03.704 16:02:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:03.704 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:03.704 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:03.704 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.704 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:03.704 [2024-11-05 16:02:36.012002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.704 [2024-11-05 16:02:36.106595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:33:03.704 [2024-11-05 16:02:36.106649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:03.704 [2024-11-05 16:02:36.106663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.704 [2024-11-05 16:02:36.106670] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:03.962 "name": "raid_bdev1", 00:33:03.962 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:03.962 "strip_size_kb": 0, 00:33:03.962 "state": "online", 00:33:03.962 "raid_level": "raid1", 00:33:03.962 "superblock": true, 00:33:03.962 "num_base_bdevs": 2, 00:33:03.962 "num_base_bdevs_discovered": 1, 00:33:03.962 "num_base_bdevs_operational": 1, 00:33:03.962 "base_bdevs_list": [ 00:33:03.962 { 00:33:03.962 "name": null, 00:33:03.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.962 "is_configured": false, 00:33:03.962 "data_offset": 0, 00:33:03.962 "data_size": 7936 00:33:03.962 }, 00:33:03.962 { 00:33:03.962 "name": "BaseBdev2", 00:33:03.962 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:03.962 "is_configured": true, 00:33:03.962 "data_offset": 256, 00:33:03.962 "data_size": 7936 00:33:03.962 } 00:33:03.962 ] 00:33:03.962 }' 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:03.962 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:04.220 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:04.220 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:04.220 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:04.220 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:04.220 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:04.220 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.220 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:04.221 "name": "raid_bdev1", 00:33:04.221 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:04.221 "strip_size_kb": 0, 00:33:04.221 "state": "online", 00:33:04.221 "raid_level": "raid1", 00:33:04.221 "superblock": true, 00:33:04.221 "num_base_bdevs": 2, 00:33:04.221 "num_base_bdevs_discovered": 1, 00:33:04.221 "num_base_bdevs_operational": 1, 00:33:04.221 "base_bdevs_list": [ 00:33:04.221 { 00:33:04.221 "name": null, 00:33:04.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.221 "is_configured": false, 00:33:04.221 "data_offset": 0, 00:33:04.221 "data_size": 7936 00:33:04.221 }, 00:33:04.221 { 00:33:04.221 "name": "BaseBdev2", 00:33:04.221 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:04.221 "is_configured": true, 00:33:04.221 "data_offset": 256, 00:33:04.221 "data_size": 7936 00:33:04.221 } 00:33:04.221 ] 00:33:04.221 }' 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:04.221 [2024-11-05 16:02:36.556904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:04.221 [2024-11-05 16:02:36.556947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.221 [2024-11-05 16:02:36.556964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:04.221 [2024-11-05 16:02:36.556971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.221 [2024-11-05 16:02:36.557309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.221 [2024-11-05 16:02:36.557324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:04.221 [2024-11-05 16:02:36.557384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:04.221 [2024-11-05 16:02:36.557395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:04.221 [2024-11-05 16:02:36.557403] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:04.221 [2024-11-05 16:02:36.557410] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:33:04.221 BaseBdev1 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.221 16:02:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.155 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:05.412 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.412 16:02:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.412 "name": "raid_bdev1", 00:33:05.412 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:05.412 "strip_size_kb": 0, 00:33:05.412 "state": "online", 00:33:05.412 "raid_level": "raid1", 00:33:05.412 "superblock": true, 00:33:05.412 "num_base_bdevs": 2, 00:33:05.412 "num_base_bdevs_discovered": 1, 00:33:05.412 "num_base_bdevs_operational": 1, 00:33:05.412 "base_bdevs_list": [ 00:33:05.412 { 00:33:05.412 "name": null, 00:33:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.412 "is_configured": false, 00:33:05.412 "data_offset": 0, 00:33:05.412 "data_size": 7936 00:33:05.412 }, 00:33:05.412 { 00:33:05.412 "name": "BaseBdev2", 00:33:05.412 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:05.412 "is_configured": true, 00:33:05.412 "data_offset": 256, 00:33:05.412 "data_size": 7936 00:33:05.412 } 00:33:05.412 ] 00:33:05.412 }' 00:33:05.412 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.412 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.706 16:02:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:05.706 "name": "raid_bdev1", 00:33:05.706 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:05.706 "strip_size_kb": 0, 00:33:05.706 "state": "online", 00:33:05.706 "raid_level": "raid1", 00:33:05.706 "superblock": true, 00:33:05.706 "num_base_bdevs": 2, 00:33:05.706 "num_base_bdevs_discovered": 1, 00:33:05.706 "num_base_bdevs_operational": 1, 00:33:05.706 "base_bdevs_list": [ 00:33:05.706 { 00:33:05.706 "name": null, 00:33:05.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.706 "is_configured": false, 00:33:05.706 "data_offset": 0, 00:33:05.706 "data_size": 7936 00:33:05.706 }, 00:33:05.706 { 00:33:05.706 "name": "BaseBdev2", 00:33:05.706 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:05.706 "is_configured": true, 00:33:05.706 "data_offset": 256, 00:33:05.706 "data_size": 7936 00:33:05.706 } 00:33:05.706 ] 00:33:05.706 }' 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:05.706 16:02:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:33:05.706 16:02:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:05.706 [2024-11-05 16:02:38.017208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:05.706 [2024-11-05 16:02:38.017323] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:05.706 [2024-11-05 16:02:38.017334] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:05.706 request: 00:33:05.706 { 00:33:05.706 "base_bdev": "BaseBdev1", 00:33:05.706 "raid_bdev": "raid_bdev1", 00:33:05.706 "method": "bdev_raid_add_base_bdev", 00:33:05.706 "req_id": 1 00:33:05.706 } 00:33:05.706 Got JSON-RPC error response 00:33:05.706 response: 00:33:05.706 { 00:33:05.706 "code": -22, 00:33:05.706 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:05.706 } 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@653 -- # es=1 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:05.706 16:02:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.640 16:02:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:06.640 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.902 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.902 "name": "raid_bdev1", 00:33:06.902 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:06.902 "strip_size_kb": 0, 00:33:06.902 "state": "online", 00:33:06.902 "raid_level": "raid1", 00:33:06.902 "superblock": true, 00:33:06.902 "num_base_bdevs": 2, 00:33:06.902 "num_base_bdevs_discovered": 1, 00:33:06.902 "num_base_bdevs_operational": 1, 00:33:06.902 "base_bdevs_list": [ 00:33:06.902 { 00:33:06.902 "name": null, 00:33:06.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.902 "is_configured": false, 00:33:06.902 "data_offset": 0, 00:33:06.902 "data_size": 7936 00:33:06.902 }, 00:33:06.902 { 00:33:06.902 "name": "BaseBdev2", 00:33:06.902 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:06.902 "is_configured": true, 00:33:06.902 "data_offset": 256, 00:33:06.902 "data_size": 7936 00:33:06.902 } 00:33:06.902 ] 00:33:06.902 }' 00:33:06.902 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.902 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:07.161 "name": "raid_bdev1", 00:33:07.161 "uuid": "88003916-3ff9-4d3e-8e9c-7453d8571545", 00:33:07.161 "strip_size_kb": 0, 00:33:07.161 "state": "online", 00:33:07.161 "raid_level": "raid1", 00:33:07.161 "superblock": true, 00:33:07.161 "num_base_bdevs": 2, 00:33:07.161 "num_base_bdevs_discovered": 1, 00:33:07.161 "num_base_bdevs_operational": 1, 00:33:07.161 "base_bdevs_list": [ 00:33:07.161 { 00:33:07.161 "name": null, 00:33:07.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.161 "is_configured": false, 00:33:07.161 "data_offset": 0, 00:33:07.161 "data_size": 7936 00:33:07.161 }, 00:33:07.161 { 00:33:07.161 "name": "BaseBdev2", 00:33:07.161 "uuid": "ade05357-68b5-5c28-8205-b6a1555d1872", 00:33:07.161 "is_configured": true, 00:33:07.161 "data_offset": 256, 00:33:07.161 "data_size": 7936 00:33:07.161 } 00:33:07.161 ] 00:33:07.161 }' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 83718 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 83718 ']' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 83718 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83718 00:33:07.161 killing process with pid 83718 00:33:07.161 Received shutdown signal, test time was about 60.000000 seconds 00:33:07.161 00:33:07.161 Latency(us) 00:33:07.161 [2024-11-05T16:02:39.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.161 [2024-11-05T16:02:39.576Z] =================================================================================================================== 00:33:07.161 [2024-11-05T16:02:39.576Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83718' 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 83718 00:33:07.161 [2024-11-05 16:02:39.478191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:07.161 16:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 83718 00:33:07.161 [2024-11-05 16:02:39.478284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:07.161 [2024-11-05 16:02:39.478321] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:07.161 [2024-11-05 16:02:39.478330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:07.423 [2024-11-05 16:02:39.622217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:07.993 16:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:33:07.993 00:33:07.993 real 0m16.958s 00:33:07.993 user 0m21.674s 00:33:07.993 sys 0m1.837s 00:33:07.993 ************************************ 00:33:07.993 END TEST raid_rebuild_test_sb_4k 00:33:07.993 ************************************ 00:33:07.993 16:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:07.993 16:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:07.993 16:02:40 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:33:07.993 16:02:40 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:33:07.993 16:02:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:33:07.993 16:02:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:07.993 16:02:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:07.993 ************************************ 00:33:07.993 START TEST raid_state_function_test_sb_md_separate 00:33:07.993 ************************************ 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:33:07.993 16:02:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.993 Process raid pid: 84376 00:33:07.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=84376 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84376' 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 84376 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 84376 ']' 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:07.993 16:02:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:07.993 [2024-11-05 16:02:40.277853] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:33:07.993 [2024-11-05 16:02:40.278126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.250 [2024-11-05 16:02:40.434732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.250 [2024-11-05 16:02:40.532562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.510 [2024-11-05 16:02:40.667824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.510 [2024-11-05 16:02:40.667865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:08.768 16:02:41 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:08.768 [2024-11-05 16:02:41.131996] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:08.768 [2024-11-05 16:02:41.132044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:08.768 [2024-11-05 16:02:41.132054] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:08.768 [2024-11-05 16:02:41.132063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:08.768 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.769 "name": "Existed_Raid", 00:33:08.769 "uuid": "e53fb0b0-e589-4cd6-88d1-afd1a932e222", 00:33:08.769 "strip_size_kb": 0, 00:33:08.769 "state": "configuring", 00:33:08.769 "raid_level": "raid1", 00:33:08.769 "superblock": true, 00:33:08.769 "num_base_bdevs": 2, 00:33:08.769 "num_base_bdevs_discovered": 0, 00:33:08.769 "num_base_bdevs_operational": 2, 00:33:08.769 "base_bdevs_list": [ 00:33:08.769 { 00:33:08.769 "name": "BaseBdev1", 00:33:08.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.769 "is_configured": false, 00:33:08.769 "data_offset": 0, 00:33:08.769 "data_size": 0 00:33:08.769 }, 00:33:08.769 { 00:33:08.769 "name": "BaseBdev2", 00:33:08.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.769 "is_configured": false, 00:33:08.769 "data_offset": 0, 00:33:08.769 "data_size": 0 00:33:08.769 } 00:33:08.769 ] 00:33:08.769 }' 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.769 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate 
-- common/autotest_common.sh@10 -- # set +x 00:33:09.334 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:09.334 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.334 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.334 [2024-11-05 16:02:41.484012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:09.334 [2024-11-05 16:02:41.484039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:09.334 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 [2024-11-05 16:02:41.492006] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:09.335 [2024-11-05 16:02:41.492037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:09.335 [2024-11-05 16:02:41.492044] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:09.335 [2024-11-05 16:02:41.492053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- 
# rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 [2024-11-05 16:02:41.519865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:09.335 BaseBdev1 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:09.335 
16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 [ 00:33:09.335 { 00:33:09.335 "name": "BaseBdev1", 00:33:09.335 "aliases": [ 00:33:09.335 "33e73ff8-b9a0-45fd-b59f-2af2e31da984" 00:33:09.335 ], 00:33:09.335 "product_name": "Malloc disk", 00:33:09.335 "block_size": 4096, 00:33:09.335 "num_blocks": 8192, 00:33:09.335 "uuid": "33e73ff8-b9a0-45fd-b59f-2af2e31da984", 00:33:09.335 "md_size": 32, 00:33:09.335 "md_interleave": false, 00:33:09.335 "dif_type": 0, 00:33:09.335 "assigned_rate_limits": { 00:33:09.335 "rw_ios_per_sec": 0, 00:33:09.335 "rw_mbytes_per_sec": 0, 00:33:09.335 "r_mbytes_per_sec": 0, 00:33:09.335 "w_mbytes_per_sec": 0 00:33:09.335 }, 00:33:09.335 "claimed": true, 00:33:09.335 "claim_type": "exclusive_write", 00:33:09.335 "zoned": false, 00:33:09.335 "supported_io_types": { 00:33:09.335 "read": true, 00:33:09.335 "write": true, 00:33:09.335 "unmap": true, 00:33:09.335 "flush": true, 00:33:09.335 "reset": true, 00:33:09.335 "nvme_admin": false, 00:33:09.335 "nvme_io": false, 00:33:09.335 "nvme_io_md": false, 00:33:09.335 "write_zeroes": true, 00:33:09.335 "zcopy": true, 00:33:09.335 "get_zone_info": false, 00:33:09.335 "zone_management": false, 00:33:09.335 "zone_append": false, 00:33:09.335 "compare": false, 00:33:09.335 "compare_and_write": false, 00:33:09.335 "abort": true, 00:33:09.335 "seek_hole": false, 00:33:09.335 "seek_data": false, 00:33:09.335 "copy": true, 00:33:09.335 "nvme_iov_md": false 00:33:09.335 }, 00:33:09.335 "memory_domains": [ 00:33:09.335 { 00:33:09.335 "dma_device_id": "system", 00:33:09.335 "dma_device_type": 1 00:33:09.335 }, 00:33:09.335 { 00:33:09.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.335 "dma_device_type": 2 00:33:09.335 } 00:33:09.335 ], 00:33:09.335 "driver_specific": {} 00:33:09.335 } 00:33:09.335 ] 00:33:09.335 
16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.335 16:02:41 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.335 "name": "Existed_Raid", 00:33:09.335 "uuid": "05895074-c150-4b65-b43a-2eb20d29e81e", 00:33:09.335 "strip_size_kb": 0, 00:33:09.335 "state": "configuring", 00:33:09.335 "raid_level": "raid1", 00:33:09.335 "superblock": true, 00:33:09.335 "num_base_bdevs": 2, 00:33:09.335 "num_base_bdevs_discovered": 1, 00:33:09.335 "num_base_bdevs_operational": 2, 00:33:09.335 "base_bdevs_list": [ 00:33:09.335 { 00:33:09.335 "name": "BaseBdev1", 00:33:09.335 "uuid": "33e73ff8-b9a0-45fd-b59f-2af2e31da984", 00:33:09.335 "is_configured": true, 00:33:09.335 "data_offset": 256, 00:33:09.335 "data_size": 7936 00:33:09.335 }, 00:33:09.335 { 00:33:09.335 "name": "BaseBdev2", 00:33:09.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.335 "is_configured": false, 00:33:09.335 "data_offset": 0, 00:33:09.335 "data_size": 0 00:33:09.335 } 00:33:09.335 ] 00:33:09.335 }' 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.335 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.594 [2024-11-05 16:02:41.839977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:09.594 [2024-11-05 16:02:41.840016] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.594 [2024-11-05 16:02:41.848006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:09.594 [2024-11-05 16:02:41.849477] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:09.594 [2024-11-05 16:02:41.849508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.594 16:02:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.594 "name": "Existed_Raid", 00:33:09.594 "uuid": "caf638a5-e43b-47c7-94f3-fc4fe52db811", 00:33:09.594 "strip_size_kb": 0, 00:33:09.594 "state": "configuring", 00:33:09.594 "raid_level": "raid1", 00:33:09.594 "superblock": true, 00:33:09.594 "num_base_bdevs": 2, 00:33:09.594 "num_base_bdevs_discovered": 1, 00:33:09.594 "num_base_bdevs_operational": 2, 00:33:09.594 "base_bdevs_list": [ 00:33:09.594 { 00:33:09.594 "name": "BaseBdev1", 00:33:09.594 "uuid": "33e73ff8-b9a0-45fd-b59f-2af2e31da984", 
00:33:09.594 "is_configured": true, 00:33:09.594 "data_offset": 256, 00:33:09.594 "data_size": 7936 00:33:09.594 }, 00:33:09.594 { 00:33:09.594 "name": "BaseBdev2", 00:33:09.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.594 "is_configured": false, 00:33:09.594 "data_offset": 0, 00:33:09.594 "data_size": 0 00:33:09.594 } 00:33:09.594 ] 00:33:09.594 }' 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.594 16:02:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.853 [2024-11-05 16:02:42.186815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:09.853 BaseBdev2 00:33:09.853 [2024-11-05 16:02:42.187122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:09.853 [2024-11-05 16:02:42.187138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:09.853 [2024-11-05 16:02:42.187205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:09.853 [2024-11-05 16:02:42.187292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:09.853 [2024-11-05 16:02:42.187301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:09.853 [2024-11-05 16:02:42.187366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.853 [ 00:33:09.853 { 00:33:09.853 "name": "BaseBdev2", 00:33:09.853 "aliases": [ 00:33:09.853 "583cc6b7-276a-45d6-9c0b-8852fe86ddea" 00:33:09.853 ], 00:33:09.853 "product_name": "Malloc disk", 00:33:09.853 "block_size": 4096, 00:33:09.853 "num_blocks": 8192, 00:33:09.853 "uuid": 
"583cc6b7-276a-45d6-9c0b-8852fe86ddea", 00:33:09.853 "md_size": 32, 00:33:09.853 "md_interleave": false, 00:33:09.853 "dif_type": 0, 00:33:09.853 "assigned_rate_limits": { 00:33:09.853 "rw_ios_per_sec": 0, 00:33:09.853 "rw_mbytes_per_sec": 0, 00:33:09.853 "r_mbytes_per_sec": 0, 00:33:09.853 "w_mbytes_per_sec": 0 00:33:09.853 }, 00:33:09.853 "claimed": true, 00:33:09.853 "claim_type": "exclusive_write", 00:33:09.853 "zoned": false, 00:33:09.853 "supported_io_types": { 00:33:09.853 "read": true, 00:33:09.853 "write": true, 00:33:09.853 "unmap": true, 00:33:09.853 "flush": true, 00:33:09.853 "reset": true, 00:33:09.853 "nvme_admin": false, 00:33:09.853 "nvme_io": false, 00:33:09.853 "nvme_io_md": false, 00:33:09.853 "write_zeroes": true, 00:33:09.853 "zcopy": true, 00:33:09.853 "get_zone_info": false, 00:33:09.853 "zone_management": false, 00:33:09.853 "zone_append": false, 00:33:09.853 "compare": false, 00:33:09.853 "compare_and_write": false, 00:33:09.853 "abort": true, 00:33:09.853 "seek_hole": false, 00:33:09.853 "seek_data": false, 00:33:09.853 "copy": true, 00:33:09.853 "nvme_iov_md": false 00:33:09.853 }, 00:33:09.853 "memory_domains": [ 00:33:09.853 { 00:33:09.853 "dma_device_id": "system", 00:33:09.853 "dma_device_type": 1 00:33:09.853 }, 00:33:09.853 { 00:33:09.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.853 "dma_device_type": 2 00:33:09.853 } 00:33:09.853 ], 00:33:09.853 "driver_specific": {} 00:33:09.853 } 00:33:09.853 ] 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:09.853 16:02:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.853 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.853 "name": "Existed_Raid", 00:33:09.853 "uuid": "caf638a5-e43b-47c7-94f3-fc4fe52db811", 00:33:09.853 "strip_size_kb": 0, 00:33:09.853 "state": "online", 00:33:09.853 "raid_level": "raid1", 00:33:09.853 "superblock": true, 00:33:09.853 "num_base_bdevs": 2, 00:33:09.853 "num_base_bdevs_discovered": 2, 00:33:09.853 "num_base_bdevs_operational": 2, 00:33:09.853 "base_bdevs_list": [ 00:33:09.853 { 00:33:09.853 "name": "BaseBdev1", 00:33:09.853 "uuid": "33e73ff8-b9a0-45fd-b59f-2af2e31da984", 00:33:09.853 "is_configured": true, 00:33:09.853 "data_offset": 256, 00:33:09.853 "data_size": 7936 00:33:09.853 }, 00:33:09.853 { 00:33:09.853 "name": "BaseBdev2", 00:33:09.853 "uuid": "583cc6b7-276a-45d6-9c0b-8852fe86ddea", 00:33:09.853 "is_configured": true, 00:33:09.853 "data_offset": 256, 00:33:09.853 "data_size": 7936 00:33:09.853 } 00:33:09.854 ] 00:33:09.854 }' 00:33:09.854 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.854 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:10.111 [2024-11-05 16:02:42.499188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:10.111 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.370 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:10.370 "name": "Existed_Raid", 00:33:10.370 "aliases": [ 00:33:10.370 "caf638a5-e43b-47c7-94f3-fc4fe52db811" 00:33:10.370 ], 00:33:10.370 "product_name": "Raid Volume", 00:33:10.370 "block_size": 4096, 00:33:10.370 "num_blocks": 7936, 00:33:10.370 "uuid": "caf638a5-e43b-47c7-94f3-fc4fe52db811", 00:33:10.370 "md_size": 32, 00:33:10.370 "md_interleave": false, 00:33:10.370 "dif_type": 0, 00:33:10.370 "assigned_rate_limits": { 00:33:10.370 "rw_ios_per_sec": 0, 00:33:10.370 "rw_mbytes_per_sec": 0, 00:33:10.370 "r_mbytes_per_sec": 0, 00:33:10.370 "w_mbytes_per_sec": 0 00:33:10.370 }, 00:33:10.370 "claimed": false, 00:33:10.370 "zoned": false, 00:33:10.370 "supported_io_types": { 00:33:10.370 "read": true, 00:33:10.370 "write": true, 00:33:10.370 "unmap": false, 00:33:10.370 "flush": false, 00:33:10.370 "reset": true, 00:33:10.370 "nvme_admin": false, 00:33:10.370 "nvme_io": false, 00:33:10.370 "nvme_io_md": false, 00:33:10.370 "write_zeroes": true, 00:33:10.370 "zcopy": false, 00:33:10.370 "get_zone_info": false, 00:33:10.370 "zone_management": false, 00:33:10.370 "zone_append": false, 00:33:10.370 "compare": false, 00:33:10.370 "compare_and_write": false, 00:33:10.370 "abort": false, 
00:33:10.370 "seek_hole": false, 00:33:10.370 "seek_data": false, 00:33:10.370 "copy": false, 00:33:10.370 "nvme_iov_md": false 00:33:10.370 }, 00:33:10.370 "memory_domains": [ 00:33:10.370 { 00:33:10.370 "dma_device_id": "system", 00:33:10.370 "dma_device_type": 1 00:33:10.370 }, 00:33:10.370 { 00:33:10.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.370 "dma_device_type": 2 00:33:10.370 }, 00:33:10.370 { 00:33:10.370 "dma_device_id": "system", 00:33:10.370 "dma_device_type": 1 00:33:10.370 }, 00:33:10.370 { 00:33:10.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.370 "dma_device_type": 2 00:33:10.370 } 00:33:10.370 ], 00:33:10.370 "driver_specific": { 00:33:10.370 "raid": { 00:33:10.370 "uuid": "caf638a5-e43b-47c7-94f3-fc4fe52db811", 00:33:10.370 "strip_size_kb": 0, 00:33:10.370 "state": "online", 00:33:10.370 "raid_level": "raid1", 00:33:10.370 "superblock": true, 00:33:10.370 "num_base_bdevs": 2, 00:33:10.370 "num_base_bdevs_discovered": 2, 00:33:10.370 "num_base_bdevs_operational": 2, 00:33:10.370 "base_bdevs_list": [ 00:33:10.370 { 00:33:10.370 "name": "BaseBdev1", 00:33:10.370 "uuid": "33e73ff8-b9a0-45fd-b59f-2af2e31da984", 00:33:10.370 "is_configured": true, 00:33:10.370 "data_offset": 256, 00:33:10.371 "data_size": 7936 00:33:10.371 }, 00:33:10.371 { 00:33:10.371 "name": "BaseBdev2", 00:33:10.371 "uuid": "583cc6b7-276a-45d6-9c0b-8852fe86ddea", 00:33:10.371 "is_configured": true, 00:33:10.371 "data_offset": 256, 00:33:10.371 "data_size": 7936 00:33:10.371 } 00:33:10.371 ] 00:33:10.371 } 00:33:10.371 } 00:33:10.371 }' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:10.371 BaseBdev2' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.371 [2024-11-05 16:02:42.662993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:10.371 16:02:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.371 "name": "Existed_Raid", 00:33:10.371 "uuid": "caf638a5-e43b-47c7-94f3-fc4fe52db811", 00:33:10.371 "strip_size_kb": 0, 00:33:10.371 "state": "online", 00:33:10.371 "raid_level": "raid1", 00:33:10.371 "superblock": true, 00:33:10.371 
"num_base_bdevs": 2, 00:33:10.371 "num_base_bdevs_discovered": 1, 00:33:10.371 "num_base_bdevs_operational": 1, 00:33:10.371 "base_bdevs_list": [ 00:33:10.371 { 00:33:10.371 "name": null, 00:33:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.371 "is_configured": false, 00:33:10.371 "data_offset": 0, 00:33:10.371 "data_size": 7936 00:33:10.371 }, 00:33:10.371 { 00:33:10.371 "name": "BaseBdev2", 00:33:10.371 "uuid": "583cc6b7-276a-45d6-9c0b-8852fe86ddea", 00:33:10.371 "is_configured": true, 00:33:10.371 "data_offset": 256, 00:33:10.371 "data_size": 7936 00:33:10.371 } 00:33:10.371 ] 00:33:10.371 }' 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.371 16:02:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.629 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:10.629 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:10.629 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:10.629 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.629 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.629 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.629 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:10.887 
16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.887 [2024-11-05 16:02:43.056402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:10.887 [2024-11-05 16:02:43.056484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:10.887 [2024-11-05 16:02:43.106437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:10.887 [2024-11-05 16:02:43.106567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:10.887 [2024-11-05 16:02:43.106696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 84376 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 84376 ']' 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 84376 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84376 00:33:10.887 killing process with pid 84376 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84376' 00:33:10.887 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 84376 00:33:10.888 [2024-11-05 16:02:43.167078] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:10.888 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 84376 00:33:10.888 [2024-11-05 
16:02:43.175192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:11.512 ************************************ 00:33:11.512 END TEST raid_state_function_test_sb_md_separate 00:33:11.512 ************************************ 00:33:11.512 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:33:11.512 00:33:11.512 real 0m3.501s 00:33:11.512 user 0m5.148s 00:33:11.512 sys 0m0.569s 00:33:11.512 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:11.512 16:02:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:11.512 16:02:43 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:33:11.512 16:02:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:11.512 16:02:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:11.512 16:02:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:11.512 ************************************ 00:33:11.512 START TEST raid_superblock_test_md_separate 00:33:11.512 ************************************ 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:11.512 16:02:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:11.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=84612 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 84612 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 84612 ']' 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:11.512 16:02:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:11.512 [2024-11-05 16:02:43.821332] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:33:11.512 [2024-11-05 16:02:43.821615] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84612 ] 00:33:11.770 [2024-11-05 16:02:43.969905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.770 [2024-11-05 16:02:44.049860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.770 [2024-11-05 16:02:44.157138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:11.770 [2024-11-05 16:02:44.157167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:12.336 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:12.337 16:02:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.337 malloc1 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.337 [2024-11-05 16:02:44.654649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:12.337 [2024-11-05 16:02:44.654839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.337 [2024-11-05 16:02:44.654870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:12.337 [2024-11-05 16:02:44.654878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.337 [2024-11-05 16:02:44.656409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.337 [2024-11-05 16:02:44.656441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:12.337 pt1 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:12.337 
16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.337 malloc2 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.337 [2024-11-05 16:02:44.685834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:12.337 [2024-11-05 16:02:44.685885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.337 [2024-11-05 16:02:44.685899] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:33:12.337 [2024-11-05 16:02:44.685906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.337 [2024-11-05 16:02:44.687391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.337 [2024-11-05 16:02:44.687418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:12.337 pt2 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.337 [2024-11-05 16:02:44.693871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:12.337 [2024-11-05 16:02:44.695326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:12.337 [2024-11-05 16:02:44.695456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:12.337 [2024-11-05 16:02:44.695466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:12.337 [2024-11-05 16:02:44.695525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:12.337 [2024-11-05 16:02:44.695612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:12.337 [2024-11-05 16:02:44.695621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:12.337 [2024-11-05 16:02:44.695691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:12.337 "name": "raid_bdev1", 00:33:12.337 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:12.337 "strip_size_kb": 0, 00:33:12.337 "state": "online", 00:33:12.337 "raid_level": "raid1", 00:33:12.337 "superblock": true, 00:33:12.337 "num_base_bdevs": 2, 00:33:12.337 "num_base_bdevs_discovered": 2, 00:33:12.337 "num_base_bdevs_operational": 2, 00:33:12.337 "base_bdevs_list": [ 00:33:12.337 { 00:33:12.337 "name": "pt1", 00:33:12.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:12.337 "is_configured": true, 00:33:12.337 "data_offset": 256, 00:33:12.337 "data_size": 7936 00:33:12.337 }, 00:33:12.337 { 00:33:12.337 "name": "pt2", 00:33:12.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:12.337 "is_configured": true, 00:33:12.337 "data_offset": 256, 00:33:12.337 "data_size": 7936 00:33:12.337 } 00:33:12.337 ] 00:33:12.337 }' 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:12.337 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.595 16:02:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.595 [2024-11-05 16:02:44.998281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:12.854 "name": "raid_bdev1", 00:33:12.854 "aliases": [ 00:33:12.854 "0f45ac7b-af0f-44e5-bddc-208263859d6a" 00:33:12.854 ], 00:33:12.854 "product_name": "Raid Volume", 00:33:12.854 "block_size": 4096, 00:33:12.854 "num_blocks": 7936, 00:33:12.854 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:12.854 "md_size": 32, 00:33:12.854 "md_interleave": false, 00:33:12.854 "dif_type": 0, 00:33:12.854 "assigned_rate_limits": { 00:33:12.854 "rw_ios_per_sec": 0, 00:33:12.854 "rw_mbytes_per_sec": 0, 00:33:12.854 "r_mbytes_per_sec": 0, 00:33:12.854 "w_mbytes_per_sec": 0 00:33:12.854 }, 00:33:12.854 "claimed": false, 00:33:12.854 "zoned": false, 00:33:12.854 "supported_io_types": { 00:33:12.854 "read": true, 00:33:12.854 "write": true, 00:33:12.854 "unmap": false, 00:33:12.854 "flush": false, 00:33:12.854 "reset": true, 00:33:12.854 "nvme_admin": false, 00:33:12.854 "nvme_io": false, 00:33:12.854 "nvme_io_md": false, 00:33:12.854 "write_zeroes": true, 00:33:12.854 "zcopy": false, 00:33:12.854 "get_zone_info": false, 00:33:12.854 "zone_management": false, 00:33:12.854 "zone_append": false, 00:33:12.854 "compare": 
false, 00:33:12.854 "compare_and_write": false, 00:33:12.854 "abort": false, 00:33:12.854 "seek_hole": false, 00:33:12.854 "seek_data": false, 00:33:12.854 "copy": false, 00:33:12.854 "nvme_iov_md": false 00:33:12.854 }, 00:33:12.854 "memory_domains": [ 00:33:12.854 { 00:33:12.854 "dma_device_id": "system", 00:33:12.854 "dma_device_type": 1 00:33:12.854 }, 00:33:12.854 { 00:33:12.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:12.854 "dma_device_type": 2 00:33:12.854 }, 00:33:12.854 { 00:33:12.854 "dma_device_id": "system", 00:33:12.854 "dma_device_type": 1 00:33:12.854 }, 00:33:12.854 { 00:33:12.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:12.854 "dma_device_type": 2 00:33:12.854 } 00:33:12.854 ], 00:33:12.854 "driver_specific": { 00:33:12.854 "raid": { 00:33:12.854 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:12.854 "strip_size_kb": 0, 00:33:12.854 "state": "online", 00:33:12.854 "raid_level": "raid1", 00:33:12.854 "superblock": true, 00:33:12.854 "num_base_bdevs": 2, 00:33:12.854 "num_base_bdevs_discovered": 2, 00:33:12.854 "num_base_bdevs_operational": 2, 00:33:12.854 "base_bdevs_list": [ 00:33:12.854 { 00:33:12.854 "name": "pt1", 00:33:12.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:12.854 "is_configured": true, 00:33:12.854 "data_offset": 256, 00:33:12.854 "data_size": 7936 00:33:12.854 }, 00:33:12.854 { 00:33:12.854 "name": "pt2", 00:33:12.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:12.854 "is_configured": true, 00:33:12.854 "data_offset": 256, 00:33:12.854 "data_size": 7936 00:33:12.854 } 00:33:12.854 ] 00:33:12.854 } 00:33:12.854 } 00:33:12.854 }' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:12.854 pt2' 00:33:12.854 16:02:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.854 16:02:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.854 [2024-11-05 16:02:45.162152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0f45ac7b-af0f-44e5-bddc-208263859d6a 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 0f45ac7b-af0f-44e5-bddc-208263859d6a ']' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.854 [2024-11-05 16:02:45.189913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:12.854 [2024-11-05 16:02:45.189930] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:12.854 
[2024-11-05 16:02:45.189987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:12.854 [2024-11-05 16:02:45.190035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:12.854 [2024-11-05 16:02:45.190050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:12.854 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.855 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.113 [2024-11-05 16:02:45.285975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:13.113 [2024-11-05 16:02:45.287532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:13.113 [2024-11-05 16:02:45.287594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:13.113 [2024-11-05 16:02:45.287639] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:13.113 [2024-11-05 16:02:45.287651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:13.113 [2024-11-05 16:02:45.287659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:13.113 request: 00:33:13.113 { 00:33:13.113 "name": "raid_bdev1", 00:33:13.113 "raid_level": "raid1", 00:33:13.113 "base_bdevs": [ 00:33:13.113 "malloc1", 00:33:13.113 "malloc2" 00:33:13.113 ], 00:33:13.113 "superblock": false, 00:33:13.113 "method": "bdev_raid_create", 00:33:13.113 "req_id": 1 00:33:13.113 } 00:33:13.113 Got JSON-RPC error response 00:33:13.113 response: 00:33:13.113 { 00:33:13.113 "code": -17, 00:33:13.113 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:33:13.113 } 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.113 [2024-11-05 16:02:45.329969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:13.113 [2024-11-05 16:02:45.330015] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.113 [2024-11-05 16:02:45.330029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:13.113 [2024-11-05 16:02:45.330038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.113 [2024-11-05 16:02:45.331621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.113 [2024-11-05 16:02:45.331653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:13.113 [2024-11-05 16:02:45.331692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:13.113 [2024-11-05 16:02:45.331734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:13.113 pt1 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.113 "name": "raid_bdev1", 00:33:13.113 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:13.113 "strip_size_kb": 0, 00:33:13.113 "state": "configuring", 00:33:13.113 "raid_level": "raid1", 00:33:13.113 "superblock": true, 00:33:13.113 "num_base_bdevs": 2, 00:33:13.113 "num_base_bdevs_discovered": 1, 00:33:13.113 "num_base_bdevs_operational": 2, 00:33:13.113 "base_bdevs_list": [ 00:33:13.113 { 00:33:13.113 "name": "pt1", 00:33:13.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:13.113 "is_configured": true, 00:33:13.113 "data_offset": 256, 00:33:13.113 "data_size": 7936 00:33:13.113 }, 00:33:13.113 { 00:33:13.113 "name": null, 00:33:13.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:13.113 "is_configured": false, 00:33:13.113 "data_offset": 256, 00:33:13.113 "data_size": 7936 00:33:13.113 } 00:33:13.113 ] 00:33:13.113 }' 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.113 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.372 16:02:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.372 [2024-11-05 16:02:45.634035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:13.372 [2024-11-05 16:02:45.634099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.372 [2024-11-05 16:02:45.634112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:13.372 [2024-11-05 16:02:45.634121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.372 [2024-11-05 16:02:45.634287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.372 [2024-11-05 16:02:45.634298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:13.372 [2024-11-05 16:02:45.634336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:13.372 [2024-11-05 16:02:45.634352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:13.372 [2024-11-05 16:02:45.634435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:13.372 [2024-11-05 16:02:45.634444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:13.372 [2024-11-05 16:02:45.634494] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:13.372 [2024-11-05 16:02:45.634582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:13.372 [2024-11-05 16:02:45.634589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:13.372 [2024-11-05 16:02:45.634658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:13.372 pt2 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.372 16:02:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.372 "name": "raid_bdev1", 00:33:13.372 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:13.372 "strip_size_kb": 0, 00:33:13.372 "state": "online", 00:33:13.372 "raid_level": "raid1", 00:33:13.372 "superblock": true, 00:33:13.372 "num_base_bdevs": 2, 00:33:13.372 "num_base_bdevs_discovered": 2, 00:33:13.372 "num_base_bdevs_operational": 2, 00:33:13.372 "base_bdevs_list": [ 00:33:13.372 { 00:33:13.372 "name": "pt1", 00:33:13.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:13.372 "is_configured": true, 00:33:13.372 "data_offset": 256, 00:33:13.372 "data_size": 7936 00:33:13.372 }, 00:33:13.372 { 00:33:13.372 "name": "pt2", 00:33:13.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:13.372 "is_configured": true, 00:33:13.372 "data_offset": 256, 00:33:13.372 "data_size": 7936 00:33:13.372 } 00:33:13.372 ] 00:33:13.372 }' 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.372 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.631 [2024-11-05 16:02:45.946320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:13.631 "name": "raid_bdev1", 00:33:13.631 "aliases": [ 00:33:13.631 "0f45ac7b-af0f-44e5-bddc-208263859d6a" 00:33:13.631 ], 00:33:13.631 "product_name": "Raid Volume", 00:33:13.631 "block_size": 4096, 00:33:13.631 "num_blocks": 7936, 00:33:13.631 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:13.631 "md_size": 32, 00:33:13.631 "md_interleave": false, 00:33:13.631 "dif_type": 0, 00:33:13.631 "assigned_rate_limits": { 00:33:13.631 "rw_ios_per_sec": 0, 00:33:13.631 "rw_mbytes_per_sec": 0, 00:33:13.631 "r_mbytes_per_sec": 0, 00:33:13.631 
"w_mbytes_per_sec": 0 00:33:13.631 }, 00:33:13.631 "claimed": false, 00:33:13.631 "zoned": false, 00:33:13.631 "supported_io_types": { 00:33:13.631 "read": true, 00:33:13.631 "write": true, 00:33:13.631 "unmap": false, 00:33:13.631 "flush": false, 00:33:13.631 "reset": true, 00:33:13.631 "nvme_admin": false, 00:33:13.631 "nvme_io": false, 00:33:13.631 "nvme_io_md": false, 00:33:13.631 "write_zeroes": true, 00:33:13.631 "zcopy": false, 00:33:13.631 "get_zone_info": false, 00:33:13.631 "zone_management": false, 00:33:13.631 "zone_append": false, 00:33:13.631 "compare": false, 00:33:13.631 "compare_and_write": false, 00:33:13.631 "abort": false, 00:33:13.631 "seek_hole": false, 00:33:13.631 "seek_data": false, 00:33:13.631 "copy": false, 00:33:13.631 "nvme_iov_md": false 00:33:13.631 }, 00:33:13.631 "memory_domains": [ 00:33:13.631 { 00:33:13.631 "dma_device_id": "system", 00:33:13.631 "dma_device_type": 1 00:33:13.631 }, 00:33:13.631 { 00:33:13.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:13.631 "dma_device_type": 2 00:33:13.631 }, 00:33:13.631 { 00:33:13.631 "dma_device_id": "system", 00:33:13.631 "dma_device_type": 1 00:33:13.631 }, 00:33:13.631 { 00:33:13.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:13.631 "dma_device_type": 2 00:33:13.631 } 00:33:13.631 ], 00:33:13.631 "driver_specific": { 00:33:13.631 "raid": { 00:33:13.631 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:13.631 "strip_size_kb": 0, 00:33:13.631 "state": "online", 00:33:13.631 "raid_level": "raid1", 00:33:13.631 "superblock": true, 00:33:13.631 "num_base_bdevs": 2, 00:33:13.631 "num_base_bdevs_discovered": 2, 00:33:13.631 "num_base_bdevs_operational": 2, 00:33:13.631 "base_bdevs_list": [ 00:33:13.631 { 00:33:13.631 "name": "pt1", 00:33:13.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:13.631 "is_configured": true, 00:33:13.631 "data_offset": 256, 00:33:13.631 "data_size": 7936 00:33:13.631 }, 00:33:13.631 { 00:33:13.631 "name": "pt2", 00:33:13.631 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:33:13.631 "is_configured": true, 00:33:13.631 "data_offset": 256, 00:33:13.631 "data_size": 7936 00:33:13.631 } 00:33:13.631 ] 00:33:13.631 } 00:33:13.631 } 00:33:13.631 }' 00:33:13.631 16:02:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:13.631 pt2' 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.631 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.891 [2024-11-05 16:02:46.110322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 0f45ac7b-af0f-44e5-bddc-208263859d6a '!=' 0f45ac7b-af0f-44e5-bddc-208263859d6a ']' 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.891 [2024-11-05 16:02:46.134140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.891 16:02:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.891 "name": "raid_bdev1", 00:33:13.891 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:13.891 "strip_size_kb": 0, 00:33:13.891 "state": "online", 00:33:13.891 "raid_level": "raid1", 00:33:13.891 "superblock": true, 00:33:13.891 "num_base_bdevs": 2, 00:33:13.891 "num_base_bdevs_discovered": 1, 00:33:13.891 "num_base_bdevs_operational": 1, 00:33:13.891 "base_bdevs_list": [ 00:33:13.891 { 00:33:13.891 "name": null, 00:33:13.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.891 "is_configured": false, 00:33:13.891 "data_offset": 0, 00:33:13.891 "data_size": 7936 00:33:13.891 }, 00:33:13.891 { 00:33:13.891 "name": "pt2", 00:33:13.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:13.891 "is_configured": true, 00:33:13.891 "data_offset": 256, 00:33:13.891 "data_size": 7936 00:33:13.891 } 00:33:13.891 ] 00:33:13.891 }' 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.891 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.152 [2024-11-05 16:02:46.474180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:14.152 [2024-11-05 16:02:46.474201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:14.152 [2024-11-05 16:02:46.474256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:14.152 [2024-11-05 16:02:46.474294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:14.152 [2024-11-05 16:02:46.474303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:14.152 16:02:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.152 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.152 [2024-11-05 16:02:46.522185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:14.152 [2024-11-05 16:02:46.522233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.152 [2024-11-05 16:02:46.522245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:14.152 [2024-11-05 16:02:46.522254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.152 [2024-11-05 16:02:46.523859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:33:14.152 [2024-11-05 16:02:46.523898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:14.152 [2024-11-05 16:02:46.523936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:14.152 [2024-11-05 16:02:46.523971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:14.152 [2024-11-05 16:02:46.524037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:14.152 [2024-11-05 16:02:46.524047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:14.152 [2024-11-05 16:02:46.524100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:14.152 [2024-11-05 16:02:46.524174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:14.152 [2024-11-05 16:02:46.524180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:14.152 [2024-11-05 16:02:46.524247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:14.152 pt2 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.153 "name": "raid_bdev1", 00:33:14.153 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:14.153 "strip_size_kb": 0, 00:33:14.153 "state": "online", 00:33:14.153 "raid_level": "raid1", 00:33:14.153 "superblock": true, 00:33:14.153 "num_base_bdevs": 2, 00:33:14.153 "num_base_bdevs_discovered": 1, 00:33:14.153 "num_base_bdevs_operational": 1, 00:33:14.153 "base_bdevs_list": [ 00:33:14.153 { 00:33:14.153 "name": null, 00:33:14.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.153 "is_configured": false, 00:33:14.153 "data_offset": 256, 00:33:14.153 "data_size": 7936 00:33:14.153 }, 00:33:14.153 { 00:33:14.153 "name": "pt2", 00:33:14.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:14.153 "is_configured": true, 
00:33:14.153 "data_offset": 256, 00:33:14.153 "data_size": 7936 00:33:14.153 } 00:33:14.153 ] 00:33:14.153 }' 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.153 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.724 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:14.724 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.724 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.724 [2024-11-05 16:02:46.834222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:14.724 [2024-11-05 16:02:46.834244] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:14.724 [2024-11-05 16:02:46.834297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:14.725 [2024-11-05 16:02:46.834333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:14.725 [2024-11-05 16:02:46.834340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.725 16:02:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.725 [2024-11-05 16:02:46.878240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:14.725 [2024-11-05 16:02:46.878281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.725 [2024-11-05 16:02:46.878295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:14.725 [2024-11-05 16:02:46.878301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.725 [2024-11-05 16:02:46.879862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.725 [2024-11-05 16:02:46.879887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:14.725 [2024-11-05 16:02:46.879925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:14.725 [2024-11-05 16:02:46.879956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:14.725 [2024-11-05 16:02:46.880046] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:14.725 
[2024-11-05 16:02:46.880054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:14.725 [2024-11-05 16:02:46.880066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:14.725 [2024-11-05 16:02:46.880105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:14.725 [2024-11-05 16:02:46.880149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:33:14.725 [2024-11-05 16:02:46.880155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:14.725 [2024-11-05 16:02:46.880208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:14.725 [2024-11-05 16:02:46.880279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:14.725 [2024-11-05 16:02:46.880290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:14.725 [2024-11-05 16:02:46.880360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:14.725 pt1 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.725 "name": "raid_bdev1", 00:33:14.725 "uuid": "0f45ac7b-af0f-44e5-bddc-208263859d6a", 00:33:14.725 "strip_size_kb": 0, 00:33:14.725 "state": "online", 00:33:14.725 "raid_level": "raid1", 00:33:14.725 "superblock": true, 00:33:14.725 "num_base_bdevs": 2, 00:33:14.725 "num_base_bdevs_discovered": 1, 00:33:14.725 "num_base_bdevs_operational": 1, 00:33:14.725 "base_bdevs_list": [ 00:33:14.725 { 00:33:14.725 "name": null, 00:33:14.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.725 "is_configured": false, 00:33:14.725 "data_offset": 256, 00:33:14.725 "data_size": 7936 00:33:14.725 }, 00:33:14.725 { 00:33:14.725 
"name": "pt2", 00:33:14.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:14.725 "is_configured": true, 00:33:14.725 "data_offset": 256, 00:33:14.725 "data_size": 7936 00:33:14.725 } 00:33:14.725 ] 00:33:14.725 }' 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.725 16:02:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.986 [2024-11-05 16:02:47.234504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
0f45ac7b-af0f-44e5-bddc-208263859d6a '!=' 0f45ac7b-af0f-44e5-bddc-208263859d6a ']' 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 84612 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 84612 ']' 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 84612 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84612 00:33:14.986 killing process with pid 84612 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84612' 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 84612 00:33:14.986 [2024-11-05 16:02:47.288316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:14.986 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 84612 00:33:14.986 [2024-11-05 16:02:47.288379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:14.986 [2024-11-05 16:02:47.288416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:14.986 [2024-11-05 16:02:47.288429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
00:33:14.986 [2024-11-05 16:02:47.395330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:15.557 ************************************ 00:33:15.557 END TEST raid_superblock_test_md_separate 00:33:15.557 ************************************ 00:33:15.557 16:02:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:33:15.557 00:33:15.557 real 0m4.187s 00:33:15.557 user 0m6.400s 00:33:15.557 sys 0m0.726s 00:33:15.557 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:15.557 16:02:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:15.819 16:02:47 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:33:15.819 16:02:47 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:33:15.819 16:02:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:33:15.819 16:02:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:15.819 16:02:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:15.819 ************************************ 00:33:15.819 START TEST raid_rebuild_test_sb_md_separate 00:33:15.819 ************************************ 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=84918 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 84918 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 84918 ']' 00:33:15.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:15.819 16:02:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:15.819 [2024-11-05 16:02:48.046993] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:33:15.819 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:15.819 Zero copy mechanism will not be used. 
00:33:15.819 [2024-11-05 16:02:48.047194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84918 ] 00:33:15.819 [2024-11-05 16:02:48.196533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.081 [2024-11-05 16:02:48.273811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.081 [2024-11-05 16:02:48.380386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:16.081 [2024-11-05 16:02:48.380414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.653 BaseBdev1_malloc 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:33:16.653 [2024-11-05 16:02:48.915303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:16.653 [2024-11-05 16:02:48.915464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:16.653 [2024-11-05 16:02:48.915488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:16.653 [2024-11-05 16:02:48.915496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:16.653 [2024-11-05 16:02:48.917019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:16.653 [2024-11-05 16:02:48.917048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:16.653 BaseBdev1 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.653 BaseBdev2_malloc 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.653 [2024-11-05 16:02:48.946662] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:16.653 [2024-11-05 16:02:48.946796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:16.653 [2024-11-05 16:02:48.946815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:16.653 [2024-11-05 16:02:48.946824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:16.653 [2024-11-05 16:02:48.948296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:16.653 [2024-11-05 16:02:48.948325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:16.653 BaseBdev2 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.653 spare_malloc 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.653 spare_delay 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- 
# rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.653 16:02:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.653 [2024-11-05 16:02:49.000490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:16.653 [2024-11-05 16:02:49.000532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:16.653 [2024-11-05 16:02:49.000547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:16.653 [2024-11-05 16:02:49.000555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:16.653 [2024-11-05 16:02:49.002063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:16.654 [2024-11-05 16:02:49.002183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:16.654 spare 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.654 [2024-11-05 16:02:49.008525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:16.654 [2024-11-05 16:02:49.010052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:16.654 [2024-11-05 16:02:49.010234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:16.654 [2024-11-05 16:02:49.010296] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:16.654 [2024-11-05 16:02:49.010367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:16.654 [2024-11-05 16:02:49.010511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:16.654 [2024-11-05 16:02:49.010630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:16.654 [2024-11-05 16:02:49.010756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.654 16:02:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.654 "name": "raid_bdev1", 00:33:16.654 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:16.654 "strip_size_kb": 0, 00:33:16.654 "state": "online", 00:33:16.654 "raid_level": "raid1", 00:33:16.654 "superblock": true, 00:33:16.654 "num_base_bdevs": 2, 00:33:16.654 "num_base_bdevs_discovered": 2, 00:33:16.654 "num_base_bdevs_operational": 2, 00:33:16.654 "base_bdevs_list": [ 00:33:16.654 { 00:33:16.654 "name": "BaseBdev1", 00:33:16.654 "uuid": "d3caf61b-632c-538f-8540-1aed86d4d650", 00:33:16.654 "is_configured": true, 00:33:16.654 "data_offset": 256, 00:33:16.654 "data_size": 7936 00:33:16.654 }, 00:33:16.654 { 00:33:16.654 "name": "BaseBdev2", 00:33:16.654 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:16.654 "is_configured": true, 00:33:16.654 "data_offset": 256, 00:33:16.654 "data_size": 7936 00:33:16.654 } 00:33:16.654 ] 00:33:16.654 }' 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.654 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:17.265 [2024-11-05 16:02:49.352836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:17.265 16:02:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:17.265 [2024-11-05 16:02:49.592669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:17.265 /dev/nbd0 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@875 -- # break 00:33:17.265 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:17.266 1+0 records in 00:33:17.266 1+0 records out 00:33:17.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248128 s, 16.5 MB/s 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:33:17.266 16:02:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:33:18.209 7936+0 records in 00:33:18.209 7936+0 records out 00:33:18.209 32505856 bytes (33 MB, 31 MiB) copied, 0.638719 s, 
50.9 MB/s 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:18.209 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:18.210 [2024-11-05 16:02:50.494727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:18.210 [2024-11-05 16:02:50.503093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:18.210 "name": "raid_bdev1", 00:33:18.210 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:18.210 "strip_size_kb": 0, 00:33:18.210 "state": "online", 00:33:18.210 "raid_level": "raid1", 00:33:18.210 "superblock": true, 00:33:18.210 "num_base_bdevs": 2, 00:33:18.210 "num_base_bdevs_discovered": 1, 00:33:18.210 "num_base_bdevs_operational": 1, 00:33:18.210 "base_bdevs_list": [ 00:33:18.210 { 00:33:18.210 "name": null, 00:33:18.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.210 "is_configured": false, 00:33:18.210 "data_offset": 0, 00:33:18.210 "data_size": 7936 00:33:18.210 }, 00:33:18.210 { 00:33:18.210 "name": "BaseBdev2", 00:33:18.210 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:18.210 "is_configured": true, 00:33:18.210 "data_offset": 256, 00:33:18.210 "data_size": 7936 00:33:18.210 } 00:33:18.210 ] 00:33:18.210 }' 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:18.210 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:18.470 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:18.470 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.470 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:18.471 [2024-11-05 16:02:50.827169] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:18.471 [2024-11-05 16:02:50.834836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:33:18.471 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.471 16:02:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:18.471 [2024-11-05 16:02:50.836375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.858 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:19.858 "name": "raid_bdev1", 00:33:19.858 "uuid": 
"60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:19.858 "strip_size_kb": 0, 00:33:19.858 "state": "online", 00:33:19.858 "raid_level": "raid1", 00:33:19.858 "superblock": true, 00:33:19.858 "num_base_bdevs": 2, 00:33:19.858 "num_base_bdevs_discovered": 2, 00:33:19.858 "num_base_bdevs_operational": 2, 00:33:19.858 "process": { 00:33:19.858 "type": "rebuild", 00:33:19.858 "target": "spare", 00:33:19.858 "progress": { 00:33:19.859 "blocks": 2560, 00:33:19.859 "percent": 32 00:33:19.859 } 00:33:19.859 }, 00:33:19.859 "base_bdevs_list": [ 00:33:19.859 { 00:33:19.859 "name": "spare", 00:33:19.859 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:19.859 "is_configured": true, 00:33:19.859 "data_offset": 256, 00:33:19.859 "data_size": 7936 00:33:19.859 }, 00:33:19.859 { 00:33:19.859 "name": "BaseBdev2", 00:33:19.859 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:19.859 "is_configured": true, 00:33:19.859 "data_offset": 256, 00:33:19.859 "data_size": 7936 00:33:19.859 } 00:33:19.859 ] 00:33:19.859 }' 00:33:19.859 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:19.859 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:19.859 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:19.859 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:19.859 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:19.859 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.859 16:02:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:19.859 [2024-11-05 16:02:51.950962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:19.859 
[2024-11-05 16:02:52.041708] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:19.859 [2024-11-05 16:02:52.041760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:19.859 [2024-11-05 16:02:52.041772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:19.859 [2024-11-05 16:02:52.041780] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.859 "name": "raid_bdev1", 00:33:19.859 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:19.859 "strip_size_kb": 0, 00:33:19.859 "state": "online", 00:33:19.859 "raid_level": "raid1", 00:33:19.859 "superblock": true, 00:33:19.859 "num_base_bdevs": 2, 00:33:19.859 "num_base_bdevs_discovered": 1, 00:33:19.859 "num_base_bdevs_operational": 1, 00:33:19.859 "base_bdevs_list": [ 00:33:19.859 { 00:33:19.859 "name": null, 00:33:19.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:19.859 "is_configured": false, 00:33:19.859 "data_offset": 0, 00:33:19.859 "data_size": 7936 00:33:19.859 }, 00:33:19.859 { 00:33:19.859 "name": "BaseBdev2", 00:33:19.859 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:19.859 "is_configured": true, 00:33:19.859 "data_offset": 256, 00:33:19.859 "data_size": 7936 00:33:19.859 } 00:33:19.859 ] 00:33:19.859 }' 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.859 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:20.120 "name": "raid_bdev1", 00:33:20.120 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:20.120 "strip_size_kb": 0, 00:33:20.120 "state": "online", 00:33:20.120 "raid_level": "raid1", 00:33:20.120 "superblock": true, 00:33:20.120 "num_base_bdevs": 2, 00:33:20.120 "num_base_bdevs_discovered": 1, 00:33:20.120 "num_base_bdevs_operational": 1, 00:33:20.120 "base_bdevs_list": [ 00:33:20.120 { 00:33:20.120 "name": null, 00:33:20.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.120 "is_configured": false, 00:33:20.120 "data_offset": 0, 00:33:20.120 "data_size": 7936 00:33:20.120 }, 00:33:20.120 { 00:33:20.120 "name": "BaseBdev2", 00:33:20.120 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:20.120 "is_configured": true, 00:33:20.120 "data_offset": 256, 00:33:20.120 "data_size": 7936 00:33:20.120 } 00:33:20.120 ] 00:33:20.120 }' 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.120 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:20.120 [2024-11-05 16:02:52.465765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:20.121 [2024-11-05 16:02:52.473067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:33:20.121 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.121 16:02:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:20.121 [2024-11-05 16:02:52.474644] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:21.063 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.063 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:21.063 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:21.063 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:21.063 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:21.321 16:02:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:21.321 "name": "raid_bdev1", 00:33:21.321 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:21.321 "strip_size_kb": 0, 00:33:21.321 "state": "online", 00:33:21.321 "raid_level": "raid1", 00:33:21.321 "superblock": true, 00:33:21.321 "num_base_bdevs": 2, 00:33:21.321 "num_base_bdevs_discovered": 2, 00:33:21.321 "num_base_bdevs_operational": 2, 00:33:21.321 "process": { 00:33:21.321 "type": "rebuild", 00:33:21.321 "target": "spare", 00:33:21.321 "progress": { 00:33:21.321 "blocks": 2560, 00:33:21.321 "percent": 32 00:33:21.321 } 00:33:21.321 }, 00:33:21.321 "base_bdevs_list": [ 00:33:21.321 { 00:33:21.321 "name": "spare", 00:33:21.321 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:21.321 "is_configured": true, 00:33:21.321 "data_offset": 256, 00:33:21.321 "data_size": 7936 00:33:21.321 }, 00:33:21.321 { 00:33:21.321 "name": "BaseBdev2", 00:33:21.321 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:21.321 "is_configured": true, 00:33:21.321 "data_offset": 256, 00:33:21.321 "data_size": 7936 00:33:21.321 } 00:33:21.321 ] 00:33:21.321 }' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:21.321 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=549 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.321 16:02:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:21.321 "name": "raid_bdev1", 00:33:21.321 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:21.321 "strip_size_kb": 0, 00:33:21.321 "state": "online", 00:33:21.321 "raid_level": "raid1", 00:33:21.321 "superblock": true, 00:33:21.321 "num_base_bdevs": 2, 00:33:21.321 "num_base_bdevs_discovered": 2, 00:33:21.321 "num_base_bdevs_operational": 2, 00:33:21.321 "process": { 00:33:21.321 "type": "rebuild", 00:33:21.321 "target": "spare", 00:33:21.321 "progress": { 00:33:21.321 "blocks": 2816, 00:33:21.321 "percent": 35 00:33:21.321 } 00:33:21.321 }, 00:33:21.321 "base_bdevs_list": [ 00:33:21.321 { 00:33:21.321 "name": "spare", 00:33:21.321 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:21.321 "is_configured": true, 00:33:21.321 "data_offset": 256, 00:33:21.321 "data_size": 7936 00:33:21.321 }, 00:33:21.321 { 00:33:21.321 "name": "BaseBdev2", 00:33:21.321 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:21.321 "is_configured": true, 00:33:21.321 "data_offset": 256, 00:33:21.321 "data_size": 7936 00:33:21.321 } 00:33:21.321 ] 00:33:21.321 }' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.321 16:02:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:22.697 "name": "raid_bdev1", 00:33:22.697 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:22.697 "strip_size_kb": 0, 00:33:22.697 "state": "online", 00:33:22.697 "raid_level": "raid1", 00:33:22.697 "superblock": true, 00:33:22.697 "num_base_bdevs": 2, 00:33:22.697 
"num_base_bdevs_discovered": 2, 00:33:22.697 "num_base_bdevs_operational": 2, 00:33:22.697 "process": { 00:33:22.697 "type": "rebuild", 00:33:22.697 "target": "spare", 00:33:22.697 "progress": { 00:33:22.697 "blocks": 5632, 00:33:22.697 "percent": 70 00:33:22.697 } 00:33:22.697 }, 00:33:22.697 "base_bdevs_list": [ 00:33:22.697 { 00:33:22.697 "name": "spare", 00:33:22.697 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:22.697 "is_configured": true, 00:33:22.697 "data_offset": 256, 00:33:22.697 "data_size": 7936 00:33:22.697 }, 00:33:22.697 { 00:33:22.697 "name": "BaseBdev2", 00:33:22.697 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:22.697 "is_configured": true, 00:33:22.697 "data_offset": 256, 00:33:22.697 "data_size": 7936 00:33:22.697 } 00:33:22.697 ] 00:33:22.697 }' 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:22.697 16:02:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:23.263 [2024-11-05 16:02:55.587253] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:23.264 [2024-11-05 16:02:55.587309] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:23.264 [2024-11-05 16:02:55.587386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:23.521 "name": "raid_bdev1", 00:33:23.521 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:23.521 "strip_size_kb": 0, 00:33:23.521 "state": "online", 00:33:23.521 "raid_level": "raid1", 00:33:23.521 "superblock": true, 00:33:23.521 "num_base_bdevs": 2, 00:33:23.521 "num_base_bdevs_discovered": 2, 00:33:23.521 "num_base_bdevs_operational": 2, 00:33:23.521 "base_bdevs_list": [ 00:33:23.521 { 00:33:23.521 "name": "spare", 00:33:23.521 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:23.521 "is_configured": true, 00:33:23.521 "data_offset": 256, 00:33:23.521 "data_size": 7936 00:33:23.521 }, 00:33:23.521 { 00:33:23.521 "name": "BaseBdev2", 00:33:23.521 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:23.521 
"is_configured": true, 00:33:23.521 "data_offset": 256, 00:33:23.521 "data_size": 7936 00:33:23.521 } 00:33:23.521 ] 00:33:23.521 }' 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:23.521 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.521 16:02:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:23.521 "name": "raid_bdev1", 00:33:23.521 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:23.521 "strip_size_kb": 0, 00:33:23.521 "state": "online", 00:33:23.521 "raid_level": "raid1", 00:33:23.521 "superblock": true, 00:33:23.521 "num_base_bdevs": 2, 00:33:23.521 "num_base_bdevs_discovered": 2, 00:33:23.521 "num_base_bdevs_operational": 2, 00:33:23.521 "base_bdevs_list": [ 00:33:23.521 { 00:33:23.521 "name": "spare", 00:33:23.521 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:23.522 "is_configured": true, 00:33:23.522 "data_offset": 256, 00:33:23.522 "data_size": 7936 00:33:23.522 }, 00:33:23.522 { 00:33:23.522 "name": "BaseBdev2", 00:33:23.522 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:23.522 "is_configured": true, 00:33:23.522 "data_offset": 256, 00:33:23.522 "data_size": 7936 00:33:23.522 } 00:33:23.522 ] 00:33:23.522 }' 00:33:23.522 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:23.780 16:02:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.780 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:23.781 16:02:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.781 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.781 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:23.781 "name": "raid_bdev1", 00:33:23.781 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:23.781 "strip_size_kb": 0, 00:33:23.781 "state": "online", 00:33:23.781 "raid_level": "raid1", 00:33:23.781 "superblock": true, 00:33:23.781 "num_base_bdevs": 2, 00:33:23.781 "num_base_bdevs_discovered": 2, 00:33:23.781 "num_base_bdevs_operational": 2, 00:33:23.781 "base_bdevs_list": [ 00:33:23.781 { 00:33:23.781 "name": "spare", 00:33:23.781 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:23.781 "is_configured": true, 00:33:23.781 "data_offset": 256, 00:33:23.781 "data_size": 
7936 00:33:23.781 }, 00:33:23.781 { 00:33:23.781 "name": "BaseBdev2", 00:33:23.781 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:23.781 "is_configured": true, 00:33:23.781 "data_offset": 256, 00:33:23.781 "data_size": 7936 00:33:23.781 } 00:33:23.781 ] 00:33:23.781 }' 00:33:23.781 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:23.781 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:24.039 [2024-11-05 16:02:56.319148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:24.039 [2024-11-05 16:02:56.319260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:24.039 [2024-11-05 16:02:56.319332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:24.039 [2024-11-05 16:02:56.319389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:24.039 [2024-11-05 16:02:56.319397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:24.039 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:24.298 /dev/nbd0 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.298 1+0 records in 00:33:24.298 1+0 records out 00:33:24.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451381 s, 9.1 MB/s 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:33:24.298 16:02:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:24.298 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:24.557 /dev/nbd1 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.557 1+0 records in 00:33:24.557 1+0 records out 00:33:24.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000274893 s, 14.9 MB/s 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:24.557 16:02:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:24.815 16:02:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:24.815 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:25.074 16:02:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.074 [2024-11-05 16:02:57.365105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:25.074 [2024-11-05 16:02:57.365240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.074 [2024-11-05 16:02:57.365264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:25.074 [2024-11-05 16:02:57.365271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.074 [2024-11-05 16:02:57.366836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.074 [2024-11-05 16:02:57.366874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:25.074 [2024-11-05 16:02:57.366921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:25.074 [2024-11-05 16:02:57.366962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:33:25.074 [2024-11-05 16:02:57.367056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:25.074 spare 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.074 [2024-11-05 16:02:57.467115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:25.074 [2024-11-05 16:02:57.467137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:25.074 [2024-11-05 16:02:57.467212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:33:25.074 [2024-11-05 16:02:57.467319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:25.074 [2024-11-05 16:02:57.467326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:25.074 [2024-11-05 16:02:57.467414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.074 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.332 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:25.332 "name": "raid_bdev1", 00:33:25.332 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:25.332 "strip_size_kb": 0, 00:33:25.332 "state": "online", 00:33:25.332 "raid_level": "raid1", 00:33:25.332 "superblock": true, 00:33:25.332 "num_base_bdevs": 2, 00:33:25.332 "num_base_bdevs_discovered": 2, 00:33:25.332 "num_base_bdevs_operational": 2, 00:33:25.332 "base_bdevs_list": [ 00:33:25.332 { 00:33:25.332 "name": "spare", 00:33:25.332 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:25.332 
"is_configured": true, 00:33:25.332 "data_offset": 256, 00:33:25.332 "data_size": 7936 00:33:25.332 }, 00:33:25.332 { 00:33:25.332 "name": "BaseBdev2", 00:33:25.332 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:25.332 "is_configured": true, 00:33:25.332 "data_offset": 256, 00:33:25.332 "data_size": 7936 00:33:25.332 } 00:33:25.332 ] 00:33:25.332 }' 00:33:25.332 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:25.332 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:25.591 "name": "raid_bdev1", 00:33:25.591 "uuid": 
"60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:25.591 "strip_size_kb": 0, 00:33:25.591 "state": "online", 00:33:25.591 "raid_level": "raid1", 00:33:25.591 "superblock": true, 00:33:25.591 "num_base_bdevs": 2, 00:33:25.591 "num_base_bdevs_discovered": 2, 00:33:25.591 "num_base_bdevs_operational": 2, 00:33:25.591 "base_bdevs_list": [ 00:33:25.591 { 00:33:25.591 "name": "spare", 00:33:25.591 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:25.591 "is_configured": true, 00:33:25.591 "data_offset": 256, 00:33:25.591 "data_size": 7936 00:33:25.591 }, 00:33:25.591 { 00:33:25.591 "name": "BaseBdev2", 00:33:25.591 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:25.591 "is_configured": true, 00:33:25.591 "data_offset": 256, 00:33:25.591 "data_size": 7936 00:33:25.591 } 00:33:25.591 ] 00:33:25.591 }' 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # 
[[ spare == \s\p\a\r\e ]] 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.591 [2024-11-05 16:02:57.921232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.591 16:02:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.591 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:25.591 "name": "raid_bdev1", 00:33:25.591 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:25.591 "strip_size_kb": 0, 00:33:25.591 "state": "online", 00:33:25.591 "raid_level": "raid1", 00:33:25.591 "superblock": true, 00:33:25.591 "num_base_bdevs": 2, 00:33:25.591 "num_base_bdevs_discovered": 1, 00:33:25.591 "num_base_bdevs_operational": 1, 00:33:25.591 "base_bdevs_list": [ 00:33:25.591 { 00:33:25.591 "name": null, 00:33:25.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:25.591 "is_configured": false, 00:33:25.591 "data_offset": 0, 00:33:25.591 "data_size": 7936 00:33:25.591 }, 00:33:25.591 { 00:33:25.591 "name": "BaseBdev2", 00:33:25.591 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:25.591 "is_configured": true, 00:33:25.591 "data_offset": 256, 00:33:25.591 "data_size": 7936 00:33:25.591 } 00:33:25.591 ] 00:33:25.591 }' 00:33:25.592 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:25.592 16:02:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.850 16:02:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:25.850 16:02:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.850 16:02:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:33:25.850 [2024-11-05 16:02:58.245318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:25.850 [2024-11-05 16:02:58.245453] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:25.850 [2024-11-05 16:02:58.245466] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:25.850 [2024-11-05 16:02:58.245497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:25.850 [2024-11-05 16:02:58.252489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:33:25.850 16:02:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.850 16:02:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:25.850 [2024-11-05 16:02:58.254023] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.225 16:02:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.225 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:27.225 "name": "raid_bdev1", 00:33:27.225 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:27.225 "strip_size_kb": 0, 00:33:27.225 "state": "online", 00:33:27.225 "raid_level": "raid1", 00:33:27.225 "superblock": true, 00:33:27.225 "num_base_bdevs": 2, 00:33:27.225 "num_base_bdevs_discovered": 2, 00:33:27.225 "num_base_bdevs_operational": 2, 00:33:27.225 "process": { 00:33:27.225 "type": "rebuild", 00:33:27.225 "target": "spare", 00:33:27.225 "progress": { 00:33:27.225 "blocks": 2560, 00:33:27.225 "percent": 32 00:33:27.225 } 00:33:27.225 }, 00:33:27.225 "base_bdevs_list": [ 00:33:27.225 { 00:33:27.225 "name": "spare", 00:33:27.225 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:27.225 "is_configured": true, 00:33:27.225 "data_offset": 256, 00:33:27.225 "data_size": 7936 00:33:27.225 }, 00:33:27.225 { 00:33:27.225 "name": "BaseBdev2", 00:33:27.226 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:27.226 "is_configured": true, 00:33:27.226 "data_offset": 256, 00:33:27.226 "data_size": 7936 00:33:27.226 } 00:33:27.226 ] 00:33:27.226 }' 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 
-- # [[ spare == \s\p\a\r\e ]] 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:27.226 [2024-11-05 16:02:59.361170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:27.226 [2024-11-05 16:02:59.459047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:27.226 [2024-11-05 16:02:59.459179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:27.226 [2024-11-05 16:02:59.459195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:27.226 [2024-11-05 16:02:59.459208] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:27.226 "name": "raid_bdev1", 00:33:27.226 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:27.226 "strip_size_kb": 0, 00:33:27.226 "state": "online", 00:33:27.226 "raid_level": "raid1", 00:33:27.226 "superblock": true, 00:33:27.226 "num_base_bdevs": 2, 00:33:27.226 "num_base_bdevs_discovered": 1, 00:33:27.226 "num_base_bdevs_operational": 1, 00:33:27.226 "base_bdevs_list": [ 00:33:27.226 { 00:33:27.226 "name": null, 00:33:27.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.226 "is_configured": false, 00:33:27.226 "data_offset": 0, 00:33:27.226 "data_size": 7936 00:33:27.226 }, 00:33:27.226 { 00:33:27.226 "name": "BaseBdev2", 00:33:27.226 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:27.226 "is_configured": true, 00:33:27.226 "data_offset": 256, 00:33:27.226 "data_size": 7936 00:33:27.226 } 00:33:27.226 ] 00:33:27.226 }' 00:33:27.226 16:02:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:27.226 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:27.484 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:27.484 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.484 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:27.484 [2024-11-05 16:02:59.799154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:27.484 [2024-11-05 16:02:59.799203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.484 [2024-11-05 16:02:59.799223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:33:27.484 [2024-11-05 16:02:59.799233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.484 [2024-11-05 16:02:59.799417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.484 [2024-11-05 16:02:59.799429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:27.484 [2024-11-05 16:02:59.799472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:27.484 [2024-11-05 16:02:59.799482] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:27.484 [2024-11-05 16:02:59.799489] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:27.484 [2024-11-05 16:02:59.799504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:27.484 [2024-11-05 16:02:59.806327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:33:27.484 spare 00:33:27.484 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.484 16:02:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:27.484 [2024-11-05 16:02:59.807922] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:28.419 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.677 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:28.677 "name": 
"raid_bdev1", 00:33:28.677 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:28.677 "strip_size_kb": 0, 00:33:28.677 "state": "online", 00:33:28.677 "raid_level": "raid1", 00:33:28.677 "superblock": true, 00:33:28.677 "num_base_bdevs": 2, 00:33:28.677 "num_base_bdevs_discovered": 2, 00:33:28.677 "num_base_bdevs_operational": 2, 00:33:28.677 "process": { 00:33:28.677 "type": "rebuild", 00:33:28.677 "target": "spare", 00:33:28.677 "progress": { 00:33:28.677 "blocks": 2560, 00:33:28.677 "percent": 32 00:33:28.677 } 00:33:28.677 }, 00:33:28.677 "base_bdevs_list": [ 00:33:28.677 { 00:33:28.677 "name": "spare", 00:33:28.677 "uuid": "c353dc95-72b3-537c-9892-b98a96492e90", 00:33:28.677 "is_configured": true, 00:33:28.677 "data_offset": 256, 00:33:28.677 "data_size": 7936 00:33:28.677 }, 00:33:28.677 { 00:33:28.677 "name": "BaseBdev2", 00:33:28.677 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:28.677 "is_configured": true, 00:33:28.677 "data_offset": 256, 00:33:28.677 "data_size": 7936 00:33:28.677 } 00:33:28.677 ] 00:33:28.677 }' 00:33:28.677 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:28.677 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:28.677 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:28.677 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:28.677 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:28.678 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.678 16:03:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:28.678 [2024-11-05 16:03:00.922355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:33:28.678 [2024-11-05 16:03:01.012917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:28.678 [2024-11-05 16:03:01.012976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:28.678 [2024-11-05 16:03:01.012989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:28.678 [2024-11-05 16:03:01.012995] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:28.678 "name": "raid_bdev1", 00:33:28.678 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:28.678 "strip_size_kb": 0, 00:33:28.678 "state": "online", 00:33:28.678 "raid_level": "raid1", 00:33:28.678 "superblock": true, 00:33:28.678 "num_base_bdevs": 2, 00:33:28.678 "num_base_bdevs_discovered": 1, 00:33:28.678 "num_base_bdevs_operational": 1, 00:33:28.678 "base_bdevs_list": [ 00:33:28.678 { 00:33:28.678 "name": null, 00:33:28.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.678 "is_configured": false, 00:33:28.678 "data_offset": 0, 00:33:28.678 "data_size": 7936 00:33:28.678 }, 00:33:28.678 { 00:33:28.678 "name": "BaseBdev2", 00:33:28.678 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:28.678 "is_configured": true, 00:33:28.678 "data_offset": 256, 00:33:28.678 "data_size": 7936 00:33:28.678 } 00:33:28.678 ] 00:33:28.678 }' 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:28.678 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:28.953 16:03:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.953 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:28.953 "name": "raid_bdev1", 00:33:28.953 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:28.953 "strip_size_kb": 0, 00:33:28.953 "state": "online", 00:33:28.953 "raid_level": "raid1", 00:33:28.953 "superblock": true, 00:33:28.953 "num_base_bdevs": 2, 00:33:28.953 "num_base_bdevs_discovered": 1, 00:33:28.953 "num_base_bdevs_operational": 1, 00:33:28.953 "base_bdevs_list": [ 00:33:28.953 { 00:33:28.953 "name": null, 00:33:28.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.953 "is_configured": false, 00:33:28.953 "data_offset": 0, 00:33:28.953 "data_size": 7936 00:33:28.953 }, 00:33:28.953 { 00:33:28.953 "name": "BaseBdev2", 00:33:28.953 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:28.953 "is_configured": true, 00:33:28.953 "data_offset": 256, 00:33:28.953 "data_size": 7936 00:33:28.953 } 00:33:28.953 ] 00:33:28.953 }' 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:29.212 [2024-11-05 16:03:01.444782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:29.212 [2024-11-05 16:03:01.444987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.212 [2024-11-05 16:03:01.445010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:29.212 [2024-11-05 16:03:01.445018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.212 [2024-11-05 16:03:01.445179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.212 [2024-11-05 16:03:01.445189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:33:29.212 [2024-11-05 16:03:01.445226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:29.212 [2024-11-05 16:03:01.445235] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:29.212 [2024-11-05 16:03:01.445243] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:29.212 [2024-11-05 16:03:01.445251] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:29.212 BaseBdev1 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.212 16:03:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:30.149 "name": "raid_bdev1", 00:33:30.149 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:30.149 "strip_size_kb": 0, 00:33:30.149 "state": "online", 00:33:30.149 "raid_level": "raid1", 00:33:30.149 "superblock": true, 00:33:30.149 "num_base_bdevs": 2, 00:33:30.149 "num_base_bdevs_discovered": 1, 00:33:30.149 "num_base_bdevs_operational": 1, 00:33:30.149 "base_bdevs_list": [ 00:33:30.149 { 00:33:30.149 "name": null, 00:33:30.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.149 "is_configured": false, 00:33:30.149 "data_offset": 0, 00:33:30.149 "data_size": 7936 00:33:30.149 }, 00:33:30.149 { 00:33:30.149 "name": "BaseBdev2", 00:33:30.149 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:30.149 "is_configured": true, 00:33:30.149 "data_offset": 256, 00:33:30.149 "data_size": 7936 00:33:30.149 } 00:33:30.149 ] 00:33:30.149 }' 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:30.149 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:30.408 "name": "raid_bdev1", 00:33:30.408 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:30.408 "strip_size_kb": 0, 00:33:30.408 "state": "online", 00:33:30.408 "raid_level": "raid1", 00:33:30.408 "superblock": true, 00:33:30.408 "num_base_bdevs": 2, 00:33:30.408 "num_base_bdevs_discovered": 1, 00:33:30.408 "num_base_bdevs_operational": 1, 00:33:30.408 "base_bdevs_list": [ 00:33:30.408 { 00:33:30.408 "name": null, 00:33:30.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.408 "is_configured": false, 00:33:30.408 "data_offset": 0, 00:33:30.408 "data_size": 7936 00:33:30.408 }, 00:33:30.408 { 00:33:30.408 "name": "BaseBdev2", 00:33:30.408 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:30.408 "is_configured": 
true, 00:33:30.408 "data_offset": 256, 00:33:30.408 "data_size": 7936 00:33:30.408 } 00:33:30.408 ] 00:33:30.408 }' 00:33:30.408 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:30.667 [2024-11-05 16:03:02.885110] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:30.667 [2024-11-05 16:03:02.885230] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:30.667 [2024-11-05 16:03:02.885241] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:30.667 request: 00:33:30.667 { 00:33:30.667 "base_bdev": "BaseBdev1", 00:33:30.667 "raid_bdev": "raid_bdev1", 00:33:30.667 "method": "bdev_raid_add_base_bdev", 00:33:30.667 "req_id": 1 00:33:30.667 } 00:33:30.667 Got JSON-RPC error response 00:33:30.667 response: 00:33:30.667 { 00:33:30.667 "code": -22, 00:33:30.667 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:30.667 } 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:30.667 16:03:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:31.600 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:31.600 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:31.600 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:31.600 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:31.601 "name": "raid_bdev1", 00:33:31.601 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:31.601 "strip_size_kb": 0, 00:33:31.601 "state": "online", 00:33:31.601 "raid_level": "raid1", 00:33:31.601 "superblock": true, 00:33:31.601 "num_base_bdevs": 2, 00:33:31.601 "num_base_bdevs_discovered": 1, 00:33:31.601 "num_base_bdevs_operational": 1, 00:33:31.601 "base_bdevs_list": [ 00:33:31.601 { 00:33:31.601 "name": null, 00:33:31.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.601 "is_configured": false, 00:33:31.601 
"data_offset": 0, 00:33:31.601 "data_size": 7936 00:33:31.601 }, 00:33:31.601 { 00:33:31.601 "name": "BaseBdev2", 00:33:31.601 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:31.601 "is_configured": true, 00:33:31.601 "data_offset": 256, 00:33:31.601 "data_size": 7936 00:33:31.601 } 00:33:31.601 ] 00:33:31.601 }' 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:31.601 16:03:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:31.859 "name": "raid_bdev1", 00:33:31.859 "uuid": "60fb6ba1-7aba-4ca2-a8ff-4e0387c1b001", 00:33:31.859 
"strip_size_kb": 0, 00:33:31.859 "state": "online", 00:33:31.859 "raid_level": "raid1", 00:33:31.859 "superblock": true, 00:33:31.859 "num_base_bdevs": 2, 00:33:31.859 "num_base_bdevs_discovered": 1, 00:33:31.859 "num_base_bdevs_operational": 1, 00:33:31.859 "base_bdevs_list": [ 00:33:31.859 { 00:33:31.859 "name": null, 00:33:31.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.859 "is_configured": false, 00:33:31.859 "data_offset": 0, 00:33:31.859 "data_size": 7936 00:33:31.859 }, 00:33:31.859 { 00:33:31.859 "name": "BaseBdev2", 00:33:31.859 "uuid": "b0a7af79-2826-58c3-85be-e86fea72b6a5", 00:33:31.859 "is_configured": true, 00:33:31.859 "data_offset": 256, 00:33:31.859 "data_size": 7936 00:33:31.859 } 00:33:31.859 ] 00:33:31.859 }' 00:33:31.859 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 84918 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 84918 ']' 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 84918 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84918 00:33:32.117 killing process with 
pid 84918 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84918' 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 84918 00:33:32.117 Received shutdown signal, test time was about 60.000000 seconds 00:33:32.117 00:33:32.117 Latency(us) 00:33:32.117 [2024-11-05T16:03:04.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.117 [2024-11-05T16:03:04.532Z] =================================================================================================================== 00:33:32.117 [2024-11-05T16:03:04.532Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:32.117 [2024-11-05 16:03:04.351398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:32.117 16:03:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 84918 00:33:32.117 [2024-11-05 16:03:04.351516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:32.117 [2024-11-05 16:03:04.351588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:32.117 [2024-11-05 16:03:04.351659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:32.117 [2024-11-05 16:03:04.507615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:32.682 16:03:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:33:32.682 00:33:32.682 real 0m17.062s 00:33:32.682 user 0m21.827s 00:33:32.682 sys 0m1.841s 00:33:32.682 16:03:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:32.682 16:03:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:32.682 ************************************ 00:33:32.682 END TEST raid_rebuild_test_sb_md_separate 00:33:32.682 ************************************ 00:33:32.682 16:03:05 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:33:32.683 16:03:05 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:33:32.683 16:03:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:33:32.683 16:03:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:32.683 16:03:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:32.941 ************************************ 00:33:32.941 START TEST raid_state_function_test_sb_md_interleaved 00:33:32.941 ************************************ 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:32.941 16:03:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:33:32.941 Process raid pid: 85586 00:33:32.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=85586 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85586' 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 85586 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 85586 ']' 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:32.941 16:03:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:32.942 [2024-11-05 16:03:05.179252] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:33:32.942 [2024-11-05 16:03:05.179367] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.942 [2024-11-05 16:03:05.336178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.200 [2024-11-05 16:03:05.417199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.200 [2024-11-05 16:03:05.524251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:33.200 [2024-11-05 16:03:05.524282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:33.766 [2024-11-05 16:03:06.027965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:33.766 [2024-11-05 16:03:06.028004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:33.766 [2024-11-05 16:03:06.028012] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:33.766 [2024-11-05 16:03:06.028020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:33.766 16:03:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:33.766 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:33.767 16:03:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:33.767 "name": "Existed_Raid", 00:33:33.767 "uuid": "d57454e2-3e5a-46b7-8142-5a77055df55b", 00:33:33.767 "strip_size_kb": 0, 00:33:33.767 "state": "configuring", 00:33:33.767 "raid_level": "raid1", 00:33:33.767 "superblock": true, 00:33:33.767 "num_base_bdevs": 2, 00:33:33.767 "num_base_bdevs_discovered": 0, 00:33:33.767 "num_base_bdevs_operational": 2, 00:33:33.767 "base_bdevs_list": [ 00:33:33.767 { 00:33:33.767 "name": "BaseBdev1", 00:33:33.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.767 "is_configured": false, 00:33:33.767 "data_offset": 0, 00:33:33.767 "data_size": 0 00:33:33.767 }, 00:33:33.767 { 00:33:33.767 "name": "BaseBdev2", 00:33:33.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.767 "is_configured": false, 00:33:33.767 "data_offset": 0, 00:33:33.767 "data_size": 0 00:33:33.767 } 00:33:33.767 ] 00:33:33.767 }' 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:33.767 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.026 [2024-11-05 16:03:06.339991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:34.026 [2024-11-05 16:03:06.340019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.026 [2024-11-05 16:03:06.347988] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:34.026 [2024-11-05 16:03:06.348020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:34.026 [2024-11-05 16:03:06.348027] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:34.026 [2024-11-05 16:03:06.348035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.026 [2024-11-05 16:03:06.375015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:34.026 BaseBdev1 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.026 [ 00:33:34.026 { 00:33:34.026 "name": "BaseBdev1", 00:33:34.026 "aliases": [ 00:33:34.026 "450c3f71-5009-420b-9e14-8c99f0e39734" 00:33:34.026 ], 00:33:34.026 "product_name": "Malloc disk", 00:33:34.026 "block_size": 4128, 00:33:34.026 "num_blocks": 8192, 00:33:34.026 "uuid": "450c3f71-5009-420b-9e14-8c99f0e39734", 00:33:34.026 "md_size": 32, 00:33:34.026 
"md_interleave": true, 00:33:34.026 "dif_type": 0, 00:33:34.026 "assigned_rate_limits": { 00:33:34.026 "rw_ios_per_sec": 0, 00:33:34.026 "rw_mbytes_per_sec": 0, 00:33:34.026 "r_mbytes_per_sec": 0, 00:33:34.026 "w_mbytes_per_sec": 0 00:33:34.026 }, 00:33:34.026 "claimed": true, 00:33:34.026 "claim_type": "exclusive_write", 00:33:34.026 "zoned": false, 00:33:34.026 "supported_io_types": { 00:33:34.026 "read": true, 00:33:34.026 "write": true, 00:33:34.026 "unmap": true, 00:33:34.026 "flush": true, 00:33:34.026 "reset": true, 00:33:34.026 "nvme_admin": false, 00:33:34.026 "nvme_io": false, 00:33:34.026 "nvme_io_md": false, 00:33:34.026 "write_zeroes": true, 00:33:34.026 "zcopy": true, 00:33:34.026 "get_zone_info": false, 00:33:34.026 "zone_management": false, 00:33:34.026 "zone_append": false, 00:33:34.026 "compare": false, 00:33:34.026 "compare_and_write": false, 00:33:34.026 "abort": true, 00:33:34.026 "seek_hole": false, 00:33:34.026 "seek_data": false, 00:33:34.026 "copy": true, 00:33:34.026 "nvme_iov_md": false 00:33:34.026 }, 00:33:34.026 "memory_domains": [ 00:33:34.026 { 00:33:34.026 "dma_device_id": "system", 00:33:34.026 "dma_device_type": 1 00:33:34.026 }, 00:33:34.026 { 00:33:34.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:34.026 "dma_device_type": 2 00:33:34.026 } 00:33:34.026 ], 00:33:34.026 "driver_specific": {} 00:33:34.026 } 00:33:34.026 ] 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:34.026 16:03:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:34.026 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:34.027 "name": "Existed_Raid", 00:33:34.027 "uuid": "f782a1d0-c934-46be-ae90-10062f3ba3ac", 00:33:34.027 "strip_size_kb": 0, 00:33:34.027 "state": "configuring", 00:33:34.027 "raid_level": "raid1", 
00:33:34.027 "superblock": true, 00:33:34.027 "num_base_bdevs": 2, 00:33:34.027 "num_base_bdevs_discovered": 1, 00:33:34.027 "num_base_bdevs_operational": 2, 00:33:34.027 "base_bdevs_list": [ 00:33:34.027 { 00:33:34.027 "name": "BaseBdev1", 00:33:34.027 "uuid": "450c3f71-5009-420b-9e14-8c99f0e39734", 00:33:34.027 "is_configured": true, 00:33:34.027 "data_offset": 256, 00:33:34.027 "data_size": 7936 00:33:34.027 }, 00:33:34.027 { 00:33:34.027 "name": "BaseBdev2", 00:33:34.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.027 "is_configured": false, 00:33:34.027 "data_offset": 0, 00:33:34.027 "data_size": 0 00:33:34.027 } 00:33:34.027 ] 00:33:34.027 }' 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:34.027 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.593 [2024-11-05 16:03:06.731124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:34.593 [2024-11-05 16:03:06.731267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.593 [2024-11-05 16:03:06.739183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:34.593 [2024-11-05 16:03:06.740717] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:34.593 [2024-11-05 16:03:06.740822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:34.593 
16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:34.593 "name": "Existed_Raid", 00:33:34.593 "uuid": "27028c95-c8b4-423f-8b7d-a43cf0572e25", 00:33:34.593 "strip_size_kb": 0, 00:33:34.593 "state": "configuring", 00:33:34.593 "raid_level": "raid1", 00:33:34.593 "superblock": true, 00:33:34.593 "num_base_bdevs": 2, 00:33:34.593 "num_base_bdevs_discovered": 1, 00:33:34.593 "num_base_bdevs_operational": 2, 00:33:34.593 "base_bdevs_list": [ 00:33:34.593 { 00:33:34.593 "name": "BaseBdev1", 00:33:34.593 "uuid": "450c3f71-5009-420b-9e14-8c99f0e39734", 00:33:34.593 "is_configured": true, 00:33:34.593 "data_offset": 256, 00:33:34.593 "data_size": 7936 00:33:34.593 }, 00:33:34.593 { 00:33:34.593 "name": "BaseBdev2", 00:33:34.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.593 "is_configured": false, 00:33:34.593 "data_offset": 0, 00:33:34.593 "data_size": 0 00:33:34.593 } 00:33:34.593 ] 00:33:34.593 }' 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:33:34.593 16:03:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.852 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:33:34.852 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.853 [2024-11-05 16:03:07.069006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:34.853 [2024-11-05 16:03:07.069150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:34.853 [2024-11-05 16:03:07.069160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:34.853 [2024-11-05 16:03:07.069220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:34.853 [2024-11-05 16:03:07.069272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:34.853 [2024-11-05 16:03:07.069280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:34.853 BaseBdev2 00:33:34.853 [2024-11-05 16:03:07.069324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.853 [ 00:33:34.853 { 00:33:34.853 "name": "BaseBdev2", 00:33:34.853 "aliases": [ 00:33:34.853 "2d658127-b9e1-45be-a515-e126a304f992" 00:33:34.853 ], 00:33:34.853 "product_name": "Malloc disk", 00:33:34.853 "block_size": 4128, 00:33:34.853 "num_blocks": 8192, 00:33:34.853 "uuid": "2d658127-b9e1-45be-a515-e126a304f992", 00:33:34.853 "md_size": 32, 00:33:34.853 "md_interleave": true, 00:33:34.853 "dif_type": 0, 00:33:34.853 "assigned_rate_limits": { 00:33:34.853 "rw_ios_per_sec": 0, 00:33:34.853 "rw_mbytes_per_sec": 0, 00:33:34.853 "r_mbytes_per_sec": 0, 00:33:34.853 "w_mbytes_per_sec": 0 00:33:34.853 }, 00:33:34.853 "claimed": true, 00:33:34.853 "claim_type": "exclusive_write", 
00:33:34.853 "zoned": false, 00:33:34.853 "supported_io_types": { 00:33:34.853 "read": true, 00:33:34.853 "write": true, 00:33:34.853 "unmap": true, 00:33:34.853 "flush": true, 00:33:34.853 "reset": true, 00:33:34.853 "nvme_admin": false, 00:33:34.853 "nvme_io": false, 00:33:34.853 "nvme_io_md": false, 00:33:34.853 "write_zeroes": true, 00:33:34.853 "zcopy": true, 00:33:34.853 "get_zone_info": false, 00:33:34.853 "zone_management": false, 00:33:34.853 "zone_append": false, 00:33:34.853 "compare": false, 00:33:34.853 "compare_and_write": false, 00:33:34.853 "abort": true, 00:33:34.853 "seek_hole": false, 00:33:34.853 "seek_data": false, 00:33:34.853 "copy": true, 00:33:34.853 "nvme_iov_md": false 00:33:34.853 }, 00:33:34.853 "memory_domains": [ 00:33:34.853 { 00:33:34.853 "dma_device_id": "system", 00:33:34.853 "dma_device_type": 1 00:33:34.853 }, 00:33:34.853 { 00:33:34.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:34.853 "dma_device_type": 2 00:33:34.853 } 00:33:34.853 ], 00:33:34.853 "driver_specific": {} 00:33:34.853 } 00:33:34.853 ] 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:34.853 
16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:34.853 "name": "Existed_Raid", 00:33:34.853 "uuid": "27028c95-c8b4-423f-8b7d-a43cf0572e25", 00:33:34.853 "strip_size_kb": 0, 00:33:34.853 "state": "online", 00:33:34.853 "raid_level": "raid1", 00:33:34.853 "superblock": true, 00:33:34.853 "num_base_bdevs": 2, 00:33:34.853 "num_base_bdevs_discovered": 2, 00:33:34.853 
"num_base_bdevs_operational": 2, 00:33:34.853 "base_bdevs_list": [ 00:33:34.853 { 00:33:34.853 "name": "BaseBdev1", 00:33:34.853 "uuid": "450c3f71-5009-420b-9e14-8c99f0e39734", 00:33:34.853 "is_configured": true, 00:33:34.853 "data_offset": 256, 00:33:34.853 "data_size": 7936 00:33:34.853 }, 00:33:34.853 { 00:33:34.853 "name": "BaseBdev2", 00:33:34.853 "uuid": "2d658127-b9e1-45be-a515-e126a304f992", 00:33:34.853 "is_configured": true, 00:33:34.853 "data_offset": 256, 00:33:34.853 "data_size": 7936 00:33:34.853 } 00:33:34.853 ] 00:33:34.853 }' 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:34.853 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.112 16:03:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.112 [2024-11-05 16:03:07.421364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:35.112 "name": "Existed_Raid", 00:33:35.112 "aliases": [ 00:33:35.112 "27028c95-c8b4-423f-8b7d-a43cf0572e25" 00:33:35.112 ], 00:33:35.112 "product_name": "Raid Volume", 00:33:35.112 "block_size": 4128, 00:33:35.112 "num_blocks": 7936, 00:33:35.112 "uuid": "27028c95-c8b4-423f-8b7d-a43cf0572e25", 00:33:35.112 "md_size": 32, 00:33:35.112 "md_interleave": true, 00:33:35.112 "dif_type": 0, 00:33:35.112 "assigned_rate_limits": { 00:33:35.112 "rw_ios_per_sec": 0, 00:33:35.112 "rw_mbytes_per_sec": 0, 00:33:35.112 "r_mbytes_per_sec": 0, 00:33:35.112 "w_mbytes_per_sec": 0 00:33:35.112 }, 00:33:35.112 "claimed": false, 00:33:35.112 "zoned": false, 00:33:35.112 "supported_io_types": { 00:33:35.112 "read": true, 00:33:35.112 "write": true, 00:33:35.112 "unmap": false, 00:33:35.112 "flush": false, 00:33:35.112 "reset": true, 00:33:35.112 "nvme_admin": false, 00:33:35.112 "nvme_io": false, 00:33:35.112 "nvme_io_md": false, 00:33:35.112 "write_zeroes": true, 00:33:35.112 "zcopy": false, 00:33:35.112 "get_zone_info": false, 00:33:35.112 "zone_management": false, 00:33:35.112 "zone_append": false, 00:33:35.112 "compare": false, 00:33:35.112 "compare_and_write": false, 00:33:35.112 "abort": false, 00:33:35.112 "seek_hole": false, 00:33:35.112 "seek_data": false, 00:33:35.112 "copy": false, 00:33:35.112 "nvme_iov_md": false 00:33:35.112 }, 00:33:35.112 "memory_domains": [ 00:33:35.112 { 00:33:35.112 "dma_device_id": "system", 00:33:35.112 "dma_device_type": 1 00:33:35.112 }, 00:33:35.112 { 00:33:35.112 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:33:35.112 "dma_device_type": 2 00:33:35.112 }, 00:33:35.112 { 00:33:35.112 "dma_device_id": "system", 00:33:35.112 "dma_device_type": 1 00:33:35.112 }, 00:33:35.112 { 00:33:35.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:35.112 "dma_device_type": 2 00:33:35.112 } 00:33:35.112 ], 00:33:35.112 "driver_specific": { 00:33:35.112 "raid": { 00:33:35.112 "uuid": "27028c95-c8b4-423f-8b7d-a43cf0572e25", 00:33:35.112 "strip_size_kb": 0, 00:33:35.112 "state": "online", 00:33:35.112 "raid_level": "raid1", 00:33:35.112 "superblock": true, 00:33:35.112 "num_base_bdevs": 2, 00:33:35.112 "num_base_bdevs_discovered": 2, 00:33:35.112 "num_base_bdevs_operational": 2, 00:33:35.112 "base_bdevs_list": [ 00:33:35.112 { 00:33:35.112 "name": "BaseBdev1", 00:33:35.112 "uuid": "450c3f71-5009-420b-9e14-8c99f0e39734", 00:33:35.112 "is_configured": true, 00:33:35.112 "data_offset": 256, 00:33:35.112 "data_size": 7936 00:33:35.112 }, 00:33:35.112 { 00:33:35.112 "name": "BaseBdev2", 00:33:35.112 "uuid": "2d658127-b9e1-45be-a515-e126a304f992", 00:33:35.112 "is_configured": true, 00:33:35.112 "data_offset": 256, 00:33:35.112 "data_size": 7936 00:33:35.112 } 00:33:35.112 ] 00:33:35.112 } 00:33:35.112 } 00:33:35.112 }' 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:35.112 BaseBdev2' 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.112 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:35.371 
16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.371 [2024-11-05 16:03:07.585164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:35.371 16:03:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:35.371 "name": "Existed_Raid", 00:33:35.371 "uuid": "27028c95-c8b4-423f-8b7d-a43cf0572e25", 00:33:35.371 "strip_size_kb": 0, 00:33:35.371 "state": "online", 00:33:35.371 "raid_level": "raid1", 00:33:35.371 "superblock": true, 00:33:35.371 "num_base_bdevs": 2, 00:33:35.371 "num_base_bdevs_discovered": 1, 00:33:35.371 "num_base_bdevs_operational": 1, 00:33:35.371 "base_bdevs_list": [ 00:33:35.371 { 00:33:35.371 "name": null, 00:33:35.371 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:35.371 "is_configured": false, 00:33:35.371 "data_offset": 0, 00:33:35.371 "data_size": 7936 00:33:35.371 }, 00:33:35.371 { 00:33:35.371 "name": "BaseBdev2", 00:33:35.371 "uuid": "2d658127-b9e1-45be-a515-e126a304f992", 00:33:35.371 "is_configured": true, 00:33:35.371 "data_offset": 256, 00:33:35.371 "data_size": 7936 00:33:35.371 } 00:33:35.371 ] 00:33:35.371 }' 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:35.371 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:35.630 16:03:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.630 16:03:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.630 [2024-11-05 16:03:07.997976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:35.630 [2024-11-05 16:03:07.998056] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:35.630 [2024-11-05 16:03:08.043933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:35.630 [2024-11-05 16:03:08.043966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:35.630 [2024-11-05 16:03:08.043975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:35.630 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.630 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:35.630 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 85586 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 85586 ']' 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 85586 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85586 00:33:35.888 killing process with pid 85586 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85586' 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 85586 00:33:35.888 [2024-11-05 16:03:08.109536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:35.888 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 85586 00:33:35.888 [2024-11-05 16:03:08.117761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:36.453 
16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:33:36.453 00:33:36.453 real 0m3.553s 00:33:36.453 user 0m5.244s 00:33:36.453 sys 0m0.563s 00:33:36.453 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:36.453 16:03:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:36.453 ************************************ 00:33:36.453 END TEST raid_state_function_test_sb_md_interleaved 00:33:36.453 ************************************ 00:33:36.453 16:03:08 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:36.454 16:03:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:36.454 16:03:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:36.454 16:03:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:36.454 ************************************ 00:33:36.454 START TEST raid_superblock_test_md_interleaved 00:33:36.454 ************************************ 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=85816 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 85816 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 85816 ']' 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:36.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:36.454 16:03:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:36.454 [2024-11-05 16:03:08.798797] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:33:36.454 [2024-11-05 16:03:08.799083] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85816 ] 00:33:36.712 [2024-11-05 16:03:08.953370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.712 [2024-11-05 16:03:09.031906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.970 [2024-11-05 16:03:09.137545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:36.970 [2024-11-05 16:03:09.137677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.229 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.487 malloc1 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.487 [2024-11-05 16:03:09.660808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:37.487 [2024-11-05 16:03:09.660954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.487 [2024-11-05 16:03:09.660977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:37.487 [2024-11-05 16:03:09.660986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:37.487 
[2024-11-05 16:03:09.662479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.487 [2024-11-05 16:03:09.662508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:37.487 pt1 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.487 malloc2 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.487 [2024-11-05 16:03:09.691319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:37.487 [2024-11-05 16:03:09.691356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.487 [2024-11-05 16:03:09.691371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:37.487 [2024-11-05 16:03:09.691377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:37.487 [2024-11-05 16:03:09.692815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.487 [2024-11-05 16:03:09.692938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:37.487 pt2 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.487 [2024-11-05 16:03:09.699359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:37.487 [2024-11-05 16:03:09.700809] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:37.487 [2024-11-05 16:03:09.701035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:37.487 [2024-11-05 16:03:09.701098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:37.487 [2024-11-05 16:03:09.701172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:37.487 [2024-11-05 16:03:09.701280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:37.487 [2024-11-05 16:03:09.701302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:37.487 [2024-11-05 16:03:09.701407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:37.487 
16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:37.487 "name": "raid_bdev1", 00:33:37.487 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:37.487 "strip_size_kb": 0, 00:33:37.487 "state": "online", 00:33:37.487 "raid_level": "raid1", 00:33:37.487 "superblock": true, 00:33:37.487 "num_base_bdevs": 2, 00:33:37.487 "num_base_bdevs_discovered": 2, 00:33:37.487 "num_base_bdevs_operational": 2, 00:33:37.487 "base_bdevs_list": [ 00:33:37.487 { 00:33:37.487 "name": "pt1", 00:33:37.487 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:37.487 "is_configured": true, 00:33:37.487 "data_offset": 256, 00:33:37.487 "data_size": 7936 00:33:37.487 }, 00:33:37.487 { 00:33:37.487 "name": "pt2", 00:33:37.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:37.487 "is_configured": true, 00:33:37.487 "data_offset": 256, 00:33:37.487 "data_size": 7936 00:33:37.487 } 00:33:37.487 ] 00:33:37.487 }' 00:33:37.487 16:03:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:37.487 16:03:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.745 [2024-11-05 16:03:10.019659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.745 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:37.745 "name": "raid_bdev1", 00:33:37.745 "aliases": [ 00:33:37.745 "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1" 00:33:37.745 ], 00:33:37.745 "product_name": "Raid Volume", 00:33:37.745 "block_size": 4128, 00:33:37.745 "num_blocks": 7936, 00:33:37.745 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:37.745 "md_size": 32, 
00:33:37.745 "md_interleave": true, 00:33:37.745 "dif_type": 0, 00:33:37.745 "assigned_rate_limits": { 00:33:37.745 "rw_ios_per_sec": 0, 00:33:37.745 "rw_mbytes_per_sec": 0, 00:33:37.745 "r_mbytes_per_sec": 0, 00:33:37.745 "w_mbytes_per_sec": 0 00:33:37.745 }, 00:33:37.745 "claimed": false, 00:33:37.745 "zoned": false, 00:33:37.745 "supported_io_types": { 00:33:37.745 "read": true, 00:33:37.745 "write": true, 00:33:37.745 "unmap": false, 00:33:37.745 "flush": false, 00:33:37.745 "reset": true, 00:33:37.745 "nvme_admin": false, 00:33:37.745 "nvme_io": false, 00:33:37.745 "nvme_io_md": false, 00:33:37.745 "write_zeroes": true, 00:33:37.745 "zcopy": false, 00:33:37.745 "get_zone_info": false, 00:33:37.745 "zone_management": false, 00:33:37.745 "zone_append": false, 00:33:37.745 "compare": false, 00:33:37.745 "compare_and_write": false, 00:33:37.745 "abort": false, 00:33:37.746 "seek_hole": false, 00:33:37.746 "seek_data": false, 00:33:37.746 "copy": false, 00:33:37.746 "nvme_iov_md": false 00:33:37.746 }, 00:33:37.746 "memory_domains": [ 00:33:37.746 { 00:33:37.746 "dma_device_id": "system", 00:33:37.746 "dma_device_type": 1 00:33:37.746 }, 00:33:37.746 { 00:33:37.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.746 "dma_device_type": 2 00:33:37.746 }, 00:33:37.746 { 00:33:37.746 "dma_device_id": "system", 00:33:37.746 "dma_device_type": 1 00:33:37.746 }, 00:33:37.746 { 00:33:37.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.746 "dma_device_type": 2 00:33:37.746 } 00:33:37.746 ], 00:33:37.746 "driver_specific": { 00:33:37.746 "raid": { 00:33:37.746 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:37.746 "strip_size_kb": 0, 00:33:37.746 "state": "online", 00:33:37.746 "raid_level": "raid1", 00:33:37.746 "superblock": true, 00:33:37.746 "num_base_bdevs": 2, 00:33:37.746 "num_base_bdevs_discovered": 2, 00:33:37.746 "num_base_bdevs_operational": 2, 00:33:37.746 "base_bdevs_list": [ 00:33:37.746 { 00:33:37.746 "name": "pt1", 00:33:37.746 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:33:37.746 "is_configured": true, 00:33:37.746 "data_offset": 256, 00:33:37.746 "data_size": 7936 00:33:37.746 }, 00:33:37.746 { 00:33:37.746 "name": "pt2", 00:33:37.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:37.746 "is_configured": true, 00:33:37.746 "data_offset": 256, 00:33:37.746 "data_size": 7936 00:33:37.746 } 00:33:37.746 ] 00:33:37.746 } 00:33:37.746 } 00:33:37.746 }' 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:37.746 pt2' 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:37.746 16:03:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:37.746 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:38.005 [2024-11-05 16:03:10.183622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2e751b59-20e3-48f6-bc7a-63e3ed7e08d1 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 2e751b59-20e3-48f6-bc7a-63e3ed7e08d1 ']' 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.005 [2024-11-05 16:03:10.215397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:38.005 [2024-11-05 16:03:10.215414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:38.005 [2024-11-05 16:03:10.215474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:38.005 [2024-11-05 16:03:10.215523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:38.005 [2024-11-05 16:03:10.215532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.005 16:03:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:38.005 16:03:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.005 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.005 [2024-11-05 16:03:10.335446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:38.005 [2024-11-05 16:03:10.336934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:38.005 [2024-11-05 16:03:10.336991] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:33:38.005 [2024-11-05 16:03:10.337031] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:38.006 [2024-11-05 16:03:10.337043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:38.006 [2024-11-05 16:03:10.337051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:38.006 request: 00:33:38.006 { 00:33:38.006 "name": "raid_bdev1", 00:33:38.006 "raid_level": "raid1", 00:33:38.006 "base_bdevs": [ 00:33:38.006 "malloc1", 00:33:38.006 "malloc2" 00:33:38.006 ], 00:33:38.006 "superblock": false, 00:33:38.006 "method": "bdev_raid_create", 00:33:38.006 "req_id": 1 00:33:38.006 } 00:33:38.006 Got JSON-RPC error response 00:33:38.006 response: 00:33:38.006 { 00:33:38.006 "code": -17, 00:33:38.006 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:38.006 } 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.006 16:03:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.006 [2024-11-05 16:03:10.391432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:38.006 [2024-11-05 16:03:10.391470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.006 [2024-11-05 16:03:10.391481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:38.006 [2024-11-05 16:03:10.391489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.006 [2024-11-05 16:03:10.392973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.006 [2024-11-05 16:03:10.393000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:38.006 [2024-11-05 16:03:10.393035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:38.006 [2024-11-05 16:03:10.393076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:38.006 pt1 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.006 16:03:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.006 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.264 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:38.264 
"name": "raid_bdev1", 00:33:38.264 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:38.264 "strip_size_kb": 0, 00:33:38.264 "state": "configuring", 00:33:38.264 "raid_level": "raid1", 00:33:38.264 "superblock": true, 00:33:38.264 "num_base_bdevs": 2, 00:33:38.264 "num_base_bdevs_discovered": 1, 00:33:38.264 "num_base_bdevs_operational": 2, 00:33:38.264 "base_bdevs_list": [ 00:33:38.264 { 00:33:38.264 "name": "pt1", 00:33:38.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:38.264 "is_configured": true, 00:33:38.264 "data_offset": 256, 00:33:38.264 "data_size": 7936 00:33:38.264 }, 00:33:38.264 { 00:33:38.264 "name": null, 00:33:38.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:38.264 "is_configured": false, 00:33:38.264 "data_offset": 256, 00:33:38.264 "data_size": 7936 00:33:38.264 } 00:33:38.264 ] 00:33:38.264 }' 00:33:38.264 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:38.264 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.523 [2024-11-05 16:03:10.751513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:38.523 [2024-11-05 16:03:10.751560] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.523 [2024-11-05 16:03:10.751574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:38.523 [2024-11-05 16:03:10.751582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.523 [2024-11-05 16:03:10.751702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.523 [2024-11-05 16:03:10.751713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:38.523 [2024-11-05 16:03:10.751746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:38.523 [2024-11-05 16:03:10.751763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:38.523 [2024-11-05 16:03:10.751830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:38.523 [2024-11-05 16:03:10.751838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:38.523 [2024-11-05 16:03:10.751898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:38.523 [2024-11-05 16:03:10.751948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:38.523 [2024-11-05 16:03:10.751955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:38.523 [2024-11-05 16:03:10.752003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:38.523 pt2 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:38.523 16:03:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:38.523 "name": 
"raid_bdev1", 00:33:38.523 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:38.523 "strip_size_kb": 0, 00:33:38.523 "state": "online", 00:33:38.523 "raid_level": "raid1", 00:33:38.523 "superblock": true, 00:33:38.523 "num_base_bdevs": 2, 00:33:38.523 "num_base_bdevs_discovered": 2, 00:33:38.523 "num_base_bdevs_operational": 2, 00:33:38.523 "base_bdevs_list": [ 00:33:38.523 { 00:33:38.523 "name": "pt1", 00:33:38.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:38.523 "is_configured": true, 00:33:38.523 "data_offset": 256, 00:33:38.523 "data_size": 7936 00:33:38.523 }, 00:33:38.523 { 00:33:38.523 "name": "pt2", 00:33:38.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:38.523 "is_configured": true, 00:33:38.523 "data_offset": 256, 00:33:38.523 "data_size": 7936 00:33:38.523 } 00:33:38.523 ] 00:33:38.523 }' 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:38.523 16:03:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:38.783 16:03:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:38.783 [2024-11-05 16:03:11.063790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:38.783 "name": "raid_bdev1", 00:33:38.783 "aliases": [ 00:33:38.783 "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1" 00:33:38.783 ], 00:33:38.783 "product_name": "Raid Volume", 00:33:38.783 "block_size": 4128, 00:33:38.783 "num_blocks": 7936, 00:33:38.783 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:38.783 "md_size": 32, 00:33:38.783 "md_interleave": true, 00:33:38.783 "dif_type": 0, 00:33:38.783 "assigned_rate_limits": { 00:33:38.783 "rw_ios_per_sec": 0, 00:33:38.783 "rw_mbytes_per_sec": 0, 00:33:38.783 "r_mbytes_per_sec": 0, 00:33:38.783 "w_mbytes_per_sec": 0 00:33:38.783 }, 00:33:38.783 "claimed": false, 00:33:38.783 "zoned": false, 00:33:38.783 "supported_io_types": { 00:33:38.783 "read": true, 00:33:38.783 "write": true, 00:33:38.783 "unmap": false, 00:33:38.783 "flush": false, 00:33:38.783 "reset": true, 00:33:38.783 "nvme_admin": false, 00:33:38.783 "nvme_io": false, 00:33:38.783 "nvme_io_md": false, 00:33:38.783 "write_zeroes": true, 00:33:38.783 "zcopy": false, 00:33:38.783 "get_zone_info": false, 00:33:38.783 "zone_management": false, 00:33:38.783 "zone_append": false, 00:33:38.783 "compare": false, 00:33:38.783 "compare_and_write": false, 00:33:38.783 "abort": false, 00:33:38.783 "seek_hole": false, 00:33:38.783 "seek_data": false, 00:33:38.783 "copy": false, 00:33:38.783 "nvme_iov_md": 
false 00:33:38.783 }, 00:33:38.783 "memory_domains": [ 00:33:38.783 { 00:33:38.783 "dma_device_id": "system", 00:33:38.783 "dma_device_type": 1 00:33:38.783 }, 00:33:38.783 { 00:33:38.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:38.783 "dma_device_type": 2 00:33:38.783 }, 00:33:38.783 { 00:33:38.783 "dma_device_id": "system", 00:33:38.783 "dma_device_type": 1 00:33:38.783 }, 00:33:38.783 { 00:33:38.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:38.783 "dma_device_type": 2 00:33:38.783 } 00:33:38.783 ], 00:33:38.783 "driver_specific": { 00:33:38.783 "raid": { 00:33:38.783 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:38.783 "strip_size_kb": 0, 00:33:38.783 "state": "online", 00:33:38.783 "raid_level": "raid1", 00:33:38.783 "superblock": true, 00:33:38.783 "num_base_bdevs": 2, 00:33:38.783 "num_base_bdevs_discovered": 2, 00:33:38.783 "num_base_bdevs_operational": 2, 00:33:38.783 "base_bdevs_list": [ 00:33:38.783 { 00:33:38.783 "name": "pt1", 00:33:38.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:38.783 "is_configured": true, 00:33:38.783 "data_offset": 256, 00:33:38.783 "data_size": 7936 00:33:38.783 }, 00:33:38.783 { 00:33:38.783 "name": "pt2", 00:33:38.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:38.783 "is_configured": true, 00:33:38.783 "data_offset": 256, 00:33:38.783 "data_size": 7936 00:33:38.783 } 00:33:38.783 ] 00:33:38.783 } 00:33:38.783 } 00:33:38.783 }' 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:38.783 pt2' 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.783 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:38.784 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.042 [2024-11-05 16:03:11.215793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 2e751b59-20e3-48f6-bc7a-63e3ed7e08d1 '!=' 2e751b59-20e3-48f6-bc7a-63e3ed7e08d1 ']' 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.042 [2024-11-05 16:03:11.239611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:33:39.042 "name": "raid_bdev1", 00:33:39.042 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:39.042 "strip_size_kb": 0, 00:33:39.042 "state": "online", 00:33:39.042 "raid_level": "raid1", 00:33:39.042 "superblock": true, 00:33:39.042 "num_base_bdevs": 2, 00:33:39.042 "num_base_bdevs_discovered": 1, 00:33:39.042 "num_base_bdevs_operational": 1, 00:33:39.042 "base_bdevs_list": [ 00:33:39.042 { 00:33:39.042 "name": null, 00:33:39.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.042 "is_configured": false, 00:33:39.042 "data_offset": 0, 00:33:39.042 "data_size": 7936 00:33:39.042 }, 00:33:39.042 { 00:33:39.042 "name": "pt2", 00:33:39.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:39.042 "is_configured": true, 00:33:39.042 "data_offset": 256, 00:33:39.042 "data_size": 7936 00:33:39.042 } 00:33:39.042 ] 00:33:39.042 }' 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.042 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.301 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:39.301 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.301 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.301 [2024-11-05 16:03:11.547660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:39.301 [2024-11-05 16:03:11.547768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:39.301 [2024-11-05 16:03:11.547882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:39.302 [2024-11-05 16:03:11.548026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:33:39.302 [2024-11-05 16:03:11.548101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.302 [2024-11-05 16:03:11.599666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:39.302 [2024-11-05 16:03:11.599705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:39.302 [2024-11-05 16:03:11.599716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:39.302 [2024-11-05 16:03:11.599724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:39.302 [2024-11-05 16:03:11.601259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:39.302 [2024-11-05 16:03:11.601356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:39.302 [2024-11-05 16:03:11.601398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:39.302 [2024-11-05 16:03:11.601433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:39.302 [2024-11-05 16:03:11.601482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:39.302 [2024-11-05 16:03:11.601492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:33:39.302 [2024-11-05 16:03:11.601561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:39.302 [2024-11-05 16:03:11.601607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:39.302 [2024-11-05 16:03:11.601613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:39.302 [2024-11-05 16:03:11.601659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:39.302 pt2 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:39.302 16:03:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:39.302 "name": "raid_bdev1", 00:33:39.302 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:39.302 "strip_size_kb": 0, 00:33:39.302 "state": "online", 00:33:39.302 "raid_level": "raid1", 00:33:39.302 "superblock": true, 00:33:39.302 "num_base_bdevs": 2, 00:33:39.302 "num_base_bdevs_discovered": 1, 00:33:39.302 "num_base_bdevs_operational": 1, 00:33:39.302 "base_bdevs_list": [ 00:33:39.302 { 00:33:39.302 "name": null, 00:33:39.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.302 "is_configured": false, 00:33:39.302 "data_offset": 256, 00:33:39.302 "data_size": 7936 00:33:39.302 }, 00:33:39.302 { 00:33:39.302 "name": "pt2", 00:33:39.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:39.302 "is_configured": true, 00:33:39.302 "data_offset": 256, 00:33:39.302 "data_size": 7936 00:33:39.302 } 00:33:39.302 ] 00:33:39.302 }' 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.302 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.560 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:39.560 16:03:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.561 [2024-11-05 16:03:11.919701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:39.561 [2024-11-05 16:03:11.919720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:39.561 [2024-11-05 16:03:11.919765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:39.561 [2024-11-05 16:03:11.919802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:39.561 [2024-11-05 16:03:11.919809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.561 [2024-11-05 16:03:11.959730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:39.561 [2024-11-05 16:03:11.959863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:39.561 [2024-11-05 16:03:11.959881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:39.561 [2024-11-05 16:03:11.959888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:39.561 [2024-11-05 16:03:11.961425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:39.561 [2024-11-05 16:03:11.961452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:39.561 [2024-11-05 16:03:11.961489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:39.561 [2024-11-05 16:03:11.961522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:39.561 [2024-11-05 16:03:11.961590] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:39.561 [2024-11-05 16:03:11.961597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:39.561 [2024-11-05 16:03:11.961609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:39.561 [2024-11-05 16:03:11.961647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:39.561 [2024-11-05 16:03:11.961695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:33:39.561 [2024-11-05 16:03:11.961702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:39.561 [2024-11-05 16:03:11.961748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:39.561 [2024-11-05 16:03:11.961792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:39.561 [2024-11-05 16:03:11.961799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:39.561 [2024-11-05 16:03:11.961861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:39.561 pt1 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:39.561 16:03:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.561 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:39.819 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.819 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:39.819 "name": "raid_bdev1", 00:33:39.819 "uuid": "2e751b59-20e3-48f6-bc7a-63e3ed7e08d1", 00:33:39.819 "strip_size_kb": 0, 00:33:39.819 "state": "online", 00:33:39.819 "raid_level": "raid1", 00:33:39.819 "superblock": true, 00:33:39.819 "num_base_bdevs": 2, 00:33:39.819 "num_base_bdevs_discovered": 1, 00:33:39.819 "num_base_bdevs_operational": 1, 00:33:39.819 "base_bdevs_list": [ 00:33:39.819 { 00:33:39.819 "name": null, 00:33:39.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.819 "is_configured": false, 00:33:39.819 "data_offset": 256, 00:33:39.819 "data_size": 7936 00:33:39.819 }, 00:33:39.819 { 00:33:39.819 "name": "pt2", 00:33:39.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:39.819 "is_configured": true, 00:33:39.819 "data_offset": 256, 00:33:39.819 "data_size": 7936 00:33:39.819 } 00:33:39.819 ] 00:33:39.819 }' 00:33:39.819 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.819 16:03:11 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:40.078 [2024-11-05 16:03:12.323990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 2e751b59-20e3-48f6-bc7a-63e3ed7e08d1 '!=' 2e751b59-20e3-48f6-bc7a-63e3ed7e08d1 ']' 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 85816 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 85816 ']' 00:33:40.078 16:03:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 85816 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85816 00:33:40.078 killing process with pid 85816 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85816' 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 85816 00:33:40.078 [2024-11-05 16:03:12.377220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:40.078 [2024-11-05 16:03:12.377280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:40.078 16:03:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 85816 00:33:40.078 [2024-11-05 16:03:12.377315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:40.078 [2024-11-05 16:03:12.377326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:40.078 [2024-11-05 16:03:12.474731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:40.644 16:03:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:33:40.644 00:33:40.644 real 0m4.280s 00:33:40.644 user 0m6.649s 00:33:40.644 sys 0m0.672s 00:33:40.644 
************************************ 00:33:40.644 END TEST raid_superblock_test_md_interleaved 00:33:40.644 ************************************ 00:33:40.644 16:03:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:40.644 16:03:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:40.644 16:03:13 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:33:40.644 16:03:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:33:40.644 16:03:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:40.644 16:03:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:40.903 ************************************ 00:33:40.903 START TEST raid_rebuild_test_sb_md_interleaved 00:33:40.903 ************************************ 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=86128 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 86128 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 86128 ']' 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:40.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:40.903 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:40.903 [2024-11-05 16:03:13.139680] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:33:40.903 [2024-11-05 16:03:13.139932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:33:40.903 Zero copy mechanism will not be used. 
00:33:40.903 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86128 ] 00:33:40.903 [2024-11-05 16:03:13.295887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.162 [2024-11-05 16:03:13.372613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.162 [2024-11-05 16:03:13.478908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:41.162 [2024-11-05 16:03:13.479042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:41.729 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:41.729 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:33:41.729 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:41.729 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:33:41.729 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 BaseBdev1_malloc 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 [2024-11-05 16:03:14.005125] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:41.729 [2024-11-05 16:03:14.005171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:41.729 [2024-11-05 16:03:14.005189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:41.729 [2024-11-05 16:03:14.005198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:41.729 [2024-11-05 16:03:14.006744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:41.729 [2024-11-05 16:03:14.006870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:41.729 BaseBdev1 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 BaseBdev2_malloc 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 [2024-11-05 16:03:14.036119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:33:41.729 [2024-11-05 16:03:14.036163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:41.729 [2024-11-05 16:03:14.036175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:41.729 [2024-11-05 16:03:14.036185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:41.729 [2024-11-05 16:03:14.037618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:41.729 [2024-11-05 16:03:14.037646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:41.729 BaseBdev2 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 spare_malloc 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 spare_delay 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 [2024-11-05 16:03:14.088401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:41.729 [2024-11-05 16:03:14.088527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:41.729 [2024-11-05 16:03:14.088546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:41.729 [2024-11-05 16:03:14.088554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:41.729 [2024-11-05 16:03:14.090035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:41.729 [2024-11-05 16:03:14.090062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:41.729 spare 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.729 [2024-11-05 16:03:14.096435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:41.729 [2024-11-05 16:03:14.097904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:41.729 [2024-11-05 16:03:14.098042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:41.729 [2024-11-05 16:03:14.098052] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:41.729 [2024-11-05 16:03:14.098109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:41.729 [2024-11-05 16:03:14.098162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:41.729 [2024-11-05 16:03:14.098169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:41.729 [2024-11-05 16:03:14.098218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:41.729 16:03:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.729 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.730 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.730 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.730 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.730 "name": "raid_bdev1", 00:33:41.730 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:41.730 "strip_size_kb": 0, 00:33:41.730 "state": "online", 00:33:41.730 "raid_level": "raid1", 00:33:41.730 "superblock": true, 00:33:41.730 "num_base_bdevs": 2, 00:33:41.730 "num_base_bdevs_discovered": 2, 00:33:41.730 "num_base_bdevs_operational": 2, 00:33:41.730 "base_bdevs_list": [ 00:33:41.730 { 00:33:41.730 "name": "BaseBdev1", 00:33:41.730 "uuid": "ba787ae7-4e53-58e9-81d7-bebdf8c1d51e", 00:33:41.730 "is_configured": true, 00:33:41.730 "data_offset": 256, 00:33:41.730 "data_size": 7936 00:33:41.730 }, 00:33:41.730 { 00:33:41.730 "name": "BaseBdev2", 00:33:41.730 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:41.730 "is_configured": true, 00:33:41.730 "data_offset": 256, 00:33:41.730 "data_size": 7936 00:33:41.730 } 00:33:41.730 ] 00:33:41.730 }' 00:33:41.730 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.730 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:42.296 16:03:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:42.296 [2024-11-05 16:03:14.416717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.296 16:03:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:42.296 [2024-11-05 16:03:14.464473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.296 16:03:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:42.296 "name": "raid_bdev1", 00:33:42.296 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:42.296 "strip_size_kb": 0, 00:33:42.296 "state": "online", 00:33:42.296 "raid_level": "raid1", 00:33:42.296 "superblock": true, 00:33:42.296 "num_base_bdevs": 2, 00:33:42.296 "num_base_bdevs_discovered": 1, 00:33:42.296 "num_base_bdevs_operational": 1, 00:33:42.296 "base_bdevs_list": [ 00:33:42.296 { 00:33:42.296 "name": null, 00:33:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.296 "is_configured": false, 00:33:42.296 "data_offset": 0, 00:33:42.296 "data_size": 7936 00:33:42.296 }, 00:33:42.296 { 00:33:42.296 "name": "BaseBdev2", 00:33:42.296 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:42.296 "is_configured": true, 00:33:42.296 "data_offset": 256, 00:33:42.296 "data_size": 7936 00:33:42.296 } 00:33:42.296 ] 00:33:42.296 }' 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:42.296 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:42.575 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:42.575 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.575 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:42.575 [2024-11-05 16:03:14.780554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:42.575 [2024-11-05 16:03:14.789759] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:42.575 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.575 16:03:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:42.575 [2024-11-05 16:03:14.791339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.508 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:43.508 "name": "raid_bdev1", 00:33:43.508 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:43.508 "strip_size_kb": 0, 00:33:43.508 "state": "online", 00:33:43.508 "raid_level": "raid1", 00:33:43.508 
"superblock": true, 00:33:43.508 "num_base_bdevs": 2, 00:33:43.508 "num_base_bdevs_discovered": 2, 00:33:43.508 "num_base_bdevs_operational": 2, 00:33:43.508 "process": { 00:33:43.508 "type": "rebuild", 00:33:43.508 "target": "spare", 00:33:43.508 "progress": { 00:33:43.508 "blocks": 2560, 00:33:43.508 "percent": 32 00:33:43.509 } 00:33:43.509 }, 00:33:43.509 "base_bdevs_list": [ 00:33:43.509 { 00:33:43.509 "name": "spare", 00:33:43.509 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:43.509 "is_configured": true, 00:33:43.509 "data_offset": 256, 00:33:43.509 "data_size": 7936 00:33:43.509 }, 00:33:43.509 { 00:33:43.509 "name": "BaseBdev2", 00:33:43.509 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:43.509 "is_configured": true, 00:33:43.509 "data_offset": 256, 00:33:43.509 "data_size": 7936 00:33:43.509 } 00:33:43.509 ] 00:33:43.509 }' 00:33:43.509 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:43.509 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:43.509 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:43.509 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:43.509 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:43.509 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.509 16:03:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:43.509 [2024-11-05 16:03:15.905704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:43.767 [2024-11-05 16:03:15.996036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:33:43.767 [2024-11-05 16:03:15.996086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:43.767 [2024-11-05 16:03:15.996098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:43.767 [2024-11-05 16:03:15.996108] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:43.767 "name": "raid_bdev1", 00:33:43.767 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:43.767 "strip_size_kb": 0, 00:33:43.767 "state": "online", 00:33:43.767 "raid_level": "raid1", 00:33:43.767 "superblock": true, 00:33:43.767 "num_base_bdevs": 2, 00:33:43.767 "num_base_bdevs_discovered": 1, 00:33:43.767 "num_base_bdevs_operational": 1, 00:33:43.767 "base_bdevs_list": [ 00:33:43.767 { 00:33:43.767 "name": null, 00:33:43.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.767 "is_configured": false, 00:33:43.767 "data_offset": 0, 00:33:43.767 "data_size": 7936 00:33:43.767 }, 00:33:43.767 { 00:33:43.767 "name": "BaseBdev2", 00:33:43.767 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:43.767 "is_configured": true, 00:33:43.767 "data_offset": 256, 00:33:43.767 "data_size": 7936 00:33:43.767 } 00:33:43.767 ] 00:33:43.767 }' 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:43.767 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:44.025 
16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:44.025 "name": "raid_bdev1", 00:33:44.025 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:44.025 "strip_size_kb": 0, 00:33:44.025 "state": "online", 00:33:44.025 "raid_level": "raid1", 00:33:44.025 "superblock": true, 00:33:44.025 "num_base_bdevs": 2, 00:33:44.025 "num_base_bdevs_discovered": 1, 00:33:44.025 "num_base_bdevs_operational": 1, 00:33:44.025 "base_bdevs_list": [ 00:33:44.025 { 00:33:44.025 "name": null, 00:33:44.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.025 "is_configured": false, 00:33:44.025 "data_offset": 0, 00:33:44.025 "data_size": 7936 00:33:44.025 }, 00:33:44.025 { 00:33:44.025 "name": "BaseBdev2", 00:33:44.025 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:44.025 "is_configured": true, 00:33:44.025 "data_offset": 256, 00:33:44.025 "data_size": 7936 00:33:44.025 } 00:33:44.025 ] 00:33:44.025 }' 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:44.025 16:03:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.025 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:44.025 [2024-11-05 16:03:16.434273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:44.284 [2024-11-05 16:03:16.442779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:44.284 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.284 16:03:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:44.284 [2024-11-05 16:03:16.444332] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:45.218 
16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:45.218 "name": "raid_bdev1", 00:33:45.218 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:45.218 "strip_size_kb": 0, 00:33:45.218 "state": "online", 00:33:45.218 "raid_level": "raid1", 00:33:45.218 "superblock": true, 00:33:45.218 "num_base_bdevs": 2, 00:33:45.218 "num_base_bdevs_discovered": 2, 00:33:45.218 "num_base_bdevs_operational": 2, 00:33:45.218 "process": { 00:33:45.218 "type": "rebuild", 00:33:45.218 "target": "spare", 00:33:45.218 "progress": { 00:33:45.218 "blocks": 2560, 00:33:45.218 "percent": 32 00:33:45.218 } 00:33:45.218 }, 00:33:45.218 "base_bdevs_list": [ 00:33:45.218 { 00:33:45.218 "name": "spare", 00:33:45.218 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:45.218 "is_configured": true, 00:33:45.218 "data_offset": 256, 00:33:45.218 "data_size": 7936 00:33:45.218 }, 00:33:45.218 { 00:33:45.218 "name": "BaseBdev2", 00:33:45.218 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:45.218 "is_configured": true, 00:33:45.218 "data_offset": 256, 00:33:45.218 "data_size": 7936 00:33:45.218 } 00:33:45.218 ] 00:33:45.218 }' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:45.218 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=573 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:45.218 "name": "raid_bdev1", 00:33:45.218 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:45.218 "strip_size_kb": 0, 00:33:45.218 "state": "online", 00:33:45.218 "raid_level": "raid1", 00:33:45.218 "superblock": true, 00:33:45.218 "num_base_bdevs": 2, 00:33:45.218 "num_base_bdevs_discovered": 2, 00:33:45.218 "num_base_bdevs_operational": 2, 00:33:45.218 "process": { 00:33:45.218 "type": "rebuild", 00:33:45.218 "target": "spare", 00:33:45.218 "progress": { 00:33:45.218 "blocks": 2816, 00:33:45.218 "percent": 35 00:33:45.218 } 00:33:45.218 }, 00:33:45.218 "base_bdevs_list": [ 00:33:45.218 { 00:33:45.218 "name": "spare", 00:33:45.218 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:45.218 "is_configured": true, 00:33:45.218 "data_offset": 256, 00:33:45.218 "data_size": 7936 00:33:45.218 }, 00:33:45.218 { 00:33:45.218 "name": "BaseBdev2", 00:33:45.218 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:45.218 "is_configured": true, 00:33:45.218 "data_offset": 256, 00:33:45.218 "data_size": 7936 00:33:45.218 } 00:33:45.218 ] 00:33:45.218 }' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:45.218 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:45.218 16:03:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:45.476 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:45.476 16:03:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:46.409 "name": "raid_bdev1", 00:33:46.409 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:46.409 "strip_size_kb": 0, 00:33:46.409 "state": 
"online", 00:33:46.409 "raid_level": "raid1", 00:33:46.409 "superblock": true, 00:33:46.409 "num_base_bdevs": 2, 00:33:46.409 "num_base_bdevs_discovered": 2, 00:33:46.409 "num_base_bdevs_operational": 2, 00:33:46.409 "process": { 00:33:46.409 "type": "rebuild", 00:33:46.409 "target": "spare", 00:33:46.409 "progress": { 00:33:46.409 "blocks": 5376, 00:33:46.409 "percent": 67 00:33:46.409 } 00:33:46.409 }, 00:33:46.409 "base_bdevs_list": [ 00:33:46.409 { 00:33:46.409 "name": "spare", 00:33:46.409 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:46.409 "is_configured": true, 00:33:46.409 "data_offset": 256, 00:33:46.409 "data_size": 7936 00:33:46.409 }, 00:33:46.409 { 00:33:46.409 "name": "BaseBdev2", 00:33:46.409 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:46.409 "is_configured": true, 00:33:46.409 "data_offset": 256, 00:33:46.409 "data_size": 7936 00:33:46.409 } 00:33:46.409 ] 00:33:46.409 }' 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:46.409 16:03:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:47.341 [2024-11-05 16:03:19.556238] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:47.341 [2024-11-05 16:03:19.556299] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:47.341 [2024-11-05 16:03:19.556382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.341 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:47.599 "name": "raid_bdev1", 00:33:47.599 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:47.599 "strip_size_kb": 0, 00:33:47.599 "state": "online", 00:33:47.599 "raid_level": "raid1", 00:33:47.599 "superblock": true, 00:33:47.599 "num_base_bdevs": 2, 00:33:47.599 "num_base_bdevs_discovered": 2, 00:33:47.599 "num_base_bdevs_operational": 2, 00:33:47.599 "base_bdevs_list": [ 00:33:47.599 { 00:33:47.599 "name": "spare", 00:33:47.599 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:47.599 "is_configured": true, 00:33:47.599 "data_offset": 256, 
00:33:47.599 "data_size": 7936 00:33:47.599 }, 00:33:47.599 { 00:33:47.599 "name": "BaseBdev2", 00:33:47.599 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:47.599 "is_configured": true, 00:33:47.599 "data_offset": 256, 00:33:47.599 "data_size": 7936 00:33:47.599 } 00:33:47.599 ] 00:33:47.599 }' 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.599 16:03:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.599 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:47.599 "name": "raid_bdev1", 00:33:47.599 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:47.599 "strip_size_kb": 0, 00:33:47.599 "state": "online", 00:33:47.599 "raid_level": "raid1", 00:33:47.599 "superblock": true, 00:33:47.599 "num_base_bdevs": 2, 00:33:47.599 "num_base_bdevs_discovered": 2, 00:33:47.599 "num_base_bdevs_operational": 2, 00:33:47.599 "base_bdevs_list": [ 00:33:47.599 { 00:33:47.599 "name": "spare", 00:33:47.599 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:47.599 "is_configured": true, 00:33:47.599 "data_offset": 256, 00:33:47.599 "data_size": 7936 00:33:47.599 }, 00:33:47.599 { 00:33:47.599 "name": "BaseBdev2", 00:33:47.599 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:47.599 "is_configured": true, 00:33:47.600 "data_offset": 256, 00:33:47.600 "data_size": 7936 00:33:47.600 } 00:33:47.600 ] 00:33:47.600 }' 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:47.600 16:03:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.600 "name": "raid_bdev1", 00:33:47.600 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:47.600 "strip_size_kb": 0, 00:33:47.600 "state": "online", 00:33:47.600 "raid_level": "raid1", 00:33:47.600 "superblock": true, 00:33:47.600 "num_base_bdevs": 2, 00:33:47.600 "num_base_bdevs_discovered": 2, 
00:33:47.600 "num_base_bdevs_operational": 2, 00:33:47.600 "base_bdevs_list": [ 00:33:47.600 { 00:33:47.600 "name": "spare", 00:33:47.600 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:47.600 "is_configured": true, 00:33:47.600 "data_offset": 256, 00:33:47.600 "data_size": 7936 00:33:47.600 }, 00:33:47.600 { 00:33:47.600 "name": "BaseBdev2", 00:33:47.600 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:47.600 "is_configured": true, 00:33:47.600 "data_offset": 256, 00:33:47.600 "data_size": 7936 00:33:47.600 } 00:33:47.600 ] 00:33:47.600 }' 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.600 16:03:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:47.857 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:47.857 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.857 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:47.857 [2024-11-05 16:03:20.266608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:47.857 [2024-11-05 16:03:20.266714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:47.857 [2024-11-05 16:03:20.266860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:47.857 [2024-11-05 16:03:20.267018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:47.857 [2024-11-05 16:03:20.267032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:47.857 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.857 16:03:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.857 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.857 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 [2024-11-05 16:03:20.314602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:48.115 [2024-11-05 16:03:20.314640] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:33:48.115 [2024-11-05 16:03:20.314657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:48.115 [2024-11-05 16:03:20.314664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:48.115 [2024-11-05 16:03:20.316269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:48.115 [2024-11-05 16:03:20.316363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:48.115 [2024-11-05 16:03:20.316413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:48.115 [2024-11-05 16:03:20.316455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:48.115 [2024-11-05 16:03:20.316535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:48.115 spare 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 [2024-11-05 16:03:20.416599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:48.115 [2024-11-05 16:03:20.416622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:48.115 [2024-11-05 16:03:20.416698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:33:48.115 [2024-11-05 16:03:20.416762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:48.115 [2024-11-05 16:03:20.416768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:48.115 [2024-11-05 16:03:20.416836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.115 16:03:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:48.115 "name": "raid_bdev1", 00:33:48.115 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:48.115 "strip_size_kb": 0, 00:33:48.115 "state": "online", 00:33:48.115 "raid_level": "raid1", 00:33:48.115 "superblock": true, 00:33:48.115 "num_base_bdevs": 2, 00:33:48.115 "num_base_bdevs_discovered": 2, 00:33:48.115 "num_base_bdevs_operational": 2, 00:33:48.115 "base_bdevs_list": [ 00:33:48.115 { 00:33:48.115 "name": "spare", 00:33:48.115 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:48.115 "is_configured": true, 00:33:48.115 "data_offset": 256, 00:33:48.115 "data_size": 7936 00:33:48.115 }, 00:33:48.115 { 00:33:48.115 "name": "BaseBdev2", 00:33:48.115 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:48.115 "is_configured": true, 00:33:48.115 "data_offset": 256, 00:33:48.115 "data_size": 7936 00:33:48.115 } 00:33:48.115 ] 00:33:48.115 }' 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:48.115 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.372 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:48.372 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:48.372 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:48.373 "name": "raid_bdev1", 00:33:48.373 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:48.373 "strip_size_kb": 0, 00:33:48.373 "state": "online", 00:33:48.373 "raid_level": "raid1", 00:33:48.373 "superblock": true, 00:33:48.373 "num_base_bdevs": 2, 00:33:48.373 "num_base_bdevs_discovered": 2, 00:33:48.373 "num_base_bdevs_operational": 2, 00:33:48.373 "base_bdevs_list": [ 00:33:48.373 { 00:33:48.373 "name": "spare", 00:33:48.373 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:48.373 "is_configured": true, 00:33:48.373 "data_offset": 256, 00:33:48.373 "data_size": 7936 00:33:48.373 }, 00:33:48.373 { 00:33:48.373 "name": "BaseBdev2", 00:33:48.373 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:48.373 "is_configured": true, 00:33:48.373 "data_offset": 256, 00:33:48.373 "data_size": 7936 00:33:48.373 } 00:33:48.373 ] 00:33:48.373 }' 00:33:48.373 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.631 [2024-11-05 16:03:20.878755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:48.631 "name": "raid_bdev1", 00:33:48.631 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:48.631 "strip_size_kb": 0, 00:33:48.631 "state": "online", 00:33:48.631 "raid_level": "raid1", 00:33:48.631 "superblock": true, 00:33:48.631 "num_base_bdevs": 2, 00:33:48.631 "num_base_bdevs_discovered": 1, 00:33:48.631 "num_base_bdevs_operational": 1, 00:33:48.631 "base_bdevs_list": [ 00:33:48.631 { 00:33:48.631 "name": null, 00:33:48.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.631 
"is_configured": false, 00:33:48.631 "data_offset": 0, 00:33:48.631 "data_size": 7936 00:33:48.631 }, 00:33:48.631 { 00:33:48.631 "name": "BaseBdev2", 00:33:48.631 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:48.631 "is_configured": true, 00:33:48.631 "data_offset": 256, 00:33:48.631 "data_size": 7936 00:33:48.631 } 00:33:48.631 ] 00:33:48.631 }' 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:48.631 16:03:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.889 16:03:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:48.889 16:03:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.889 16:03:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.889 [2024-11-05 16:03:21.210857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:48.889 [2024-11-05 16:03:21.210991] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:48.889 [2024-11-05 16:03:21.211003] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:48.889 [2024-11-05 16:03:21.211033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:48.889 [2024-11-05 16:03:21.219510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:48.889 16:03:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.889 16:03:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:48.889 [2024-11-05 16:03:21.220973] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.822 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:33:50.080 "name": "raid_bdev1", 00:33:50.080 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:50.080 "strip_size_kb": 0, 00:33:50.080 "state": "online", 00:33:50.080 "raid_level": "raid1", 00:33:50.080 "superblock": true, 00:33:50.080 "num_base_bdevs": 2, 00:33:50.080 "num_base_bdevs_discovered": 2, 00:33:50.080 "num_base_bdevs_operational": 2, 00:33:50.080 "process": { 00:33:50.080 "type": "rebuild", 00:33:50.080 "target": "spare", 00:33:50.080 "progress": { 00:33:50.080 "blocks": 2560, 00:33:50.080 "percent": 32 00:33:50.080 } 00:33:50.080 }, 00:33:50.080 "base_bdevs_list": [ 00:33:50.080 { 00:33:50.080 "name": "spare", 00:33:50.080 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:50.080 "is_configured": true, 00:33:50.080 "data_offset": 256, 00:33:50.080 "data_size": 7936 00:33:50.080 }, 00:33:50.080 { 00:33:50.080 "name": "BaseBdev2", 00:33:50.080 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:50.080 "is_configured": true, 00:33:50.080 "data_offset": 256, 00:33:50.080 "data_size": 7936 00:33:50.080 } 00:33:50.080 ] 00:33:50.080 }' 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:50.080 [2024-11-05 16:03:22.335319] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:50.080 [2024-11-05 16:03:22.425700] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:50.080 [2024-11-05 16:03:22.425751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:50.080 [2024-11-05 16:03:22.425763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:50.080 [2024-11-05 16:03:22.425770] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:50.080 16:03:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.080 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:50.080 "name": "raid_bdev1", 00:33:50.080 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:50.080 "strip_size_kb": 0, 00:33:50.080 "state": "online", 00:33:50.080 "raid_level": "raid1", 00:33:50.080 "superblock": true, 00:33:50.080 "num_base_bdevs": 2, 00:33:50.080 "num_base_bdevs_discovered": 1, 00:33:50.080 "num_base_bdevs_operational": 1, 00:33:50.080 "base_bdevs_list": [ 00:33:50.080 { 00:33:50.080 "name": null, 00:33:50.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.080 "is_configured": false, 00:33:50.080 "data_offset": 0, 00:33:50.080 "data_size": 7936 00:33:50.080 }, 00:33:50.080 { 00:33:50.080 "name": "BaseBdev2", 00:33:50.080 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:50.080 "is_configured": true, 00:33:50.080 "data_offset": 256, 00:33:50.080 "data_size": 7936 00:33:50.080 } 00:33:50.081 ] 00:33:50.081 }' 00:33:50.081 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:50.081 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:50.339 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:50.339 16:03:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.339 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:50.597 [2024-11-05 16:03:22.756292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:50.597 [2024-11-05 16:03:22.756341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:50.597 [2024-11-05 16:03:22.756360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:50.597 [2024-11-05 16:03:22.756369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:50.597 [2024-11-05 16:03:22.756518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:50.597 [2024-11-05 16:03:22.756529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:50.597 [2024-11-05 16:03:22.756569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:50.597 [2024-11-05 16:03:22.756579] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:50.597 [2024-11-05 16:03:22.756588] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:50.597 [2024-11-05 16:03:22.756605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:50.597 [2024-11-05 16:03:22.765107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:50.597 spare 00:33:50.597 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.597 16:03:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:50.597 [2024-11-05 16:03:22.766698] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:33:51.529 "name": "raid_bdev1", 00:33:51.529 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:51.529 "strip_size_kb": 0, 00:33:51.529 "state": "online", 00:33:51.529 "raid_level": "raid1", 00:33:51.529 "superblock": true, 00:33:51.529 "num_base_bdevs": 2, 00:33:51.529 "num_base_bdevs_discovered": 2, 00:33:51.529 "num_base_bdevs_operational": 2, 00:33:51.529 "process": { 00:33:51.529 "type": "rebuild", 00:33:51.529 "target": "spare", 00:33:51.529 "progress": { 00:33:51.529 "blocks": 2560, 00:33:51.529 "percent": 32 00:33:51.529 } 00:33:51.529 }, 00:33:51.529 "base_bdevs_list": [ 00:33:51.529 { 00:33:51.529 "name": "spare", 00:33:51.529 "uuid": "1ace353c-0842-53c3-844a-01c6b48d4017", 00:33:51.529 "is_configured": true, 00:33:51.529 "data_offset": 256, 00:33:51.529 "data_size": 7936 00:33:51.529 }, 00:33:51.529 { 00:33:51.529 "name": "BaseBdev2", 00:33:51.529 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:51.529 "is_configured": true, 00:33:51.529 "data_offset": 256, 00:33:51.529 "data_size": 7936 00:33:51.529 } 00:33:51.529 ] 00:33:51.529 }' 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:51.529 [2024-11-05 
16:03:23.864765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:51.529 [2024-11-05 16:03:23.871010] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:51.529 [2024-11-05 16:03:23.871052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:51.529 [2024-11-05 16:03:23.871064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:51.529 [2024-11-05 16:03:23.871070] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.529 16:03:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.529 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:51.530 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.530 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.530 "name": "raid_bdev1", 00:33:51.530 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:51.530 "strip_size_kb": 0, 00:33:51.530 "state": "online", 00:33:51.530 "raid_level": "raid1", 00:33:51.530 "superblock": true, 00:33:51.530 "num_base_bdevs": 2, 00:33:51.530 "num_base_bdevs_discovered": 1, 00:33:51.530 "num_base_bdevs_operational": 1, 00:33:51.530 "base_bdevs_list": [ 00:33:51.530 { 00:33:51.530 "name": null, 00:33:51.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.530 "is_configured": false, 00:33:51.530 "data_offset": 0, 00:33:51.530 "data_size": 7936 00:33:51.530 }, 00:33:51.530 { 00:33:51.530 "name": "BaseBdev2", 00:33:51.530 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:51.530 "is_configured": true, 00:33:51.530 "data_offset": 256, 00:33:51.530 "data_size": 7936 00:33:51.530 } 00:33:51.530 ] 00:33:51.530 }' 00:33:51.530 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.530 16:03:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:52.095 16:03:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:52.095 "name": "raid_bdev1", 00:33:52.095 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:52.095 "strip_size_kb": 0, 00:33:52.095 "state": "online", 00:33:52.095 "raid_level": "raid1", 00:33:52.095 "superblock": true, 00:33:52.095 "num_base_bdevs": 2, 00:33:52.095 "num_base_bdevs_discovered": 1, 00:33:52.095 "num_base_bdevs_operational": 1, 00:33:52.095 "base_bdevs_list": [ 00:33:52.095 { 00:33:52.095 "name": null, 00:33:52.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.095 "is_configured": false, 00:33:52.095 "data_offset": 0, 00:33:52.095 "data_size": 7936 00:33:52.095 }, 00:33:52.095 { 00:33:52.095 "name": "BaseBdev2", 00:33:52.095 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:52.095 "is_configured": true, 00:33:52.095 "data_offset": 256, 
00:33:52.095 "data_size": 7936 00:33:52.095 } 00:33:52.095 ] 00:33:52.095 }' 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:52.095 [2024-11-05 16:03:24.320587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:52.095 [2024-11-05 16:03:24.320707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:52.095 [2024-11-05 16:03:24.320729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:52.095 [2024-11-05 16:03:24.320737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:52.095 [2024-11-05 16:03:24.320868] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:52.095 [2024-11-05 16:03:24.320877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:52.095 [2024-11-05 16:03:24.320917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:52.095 [2024-11-05 16:03:24.320926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:52.095 [2024-11-05 16:03:24.320934] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:52.095 [2024-11-05 16:03:24.320942] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:52.095 BaseBdev1 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.095 16:03:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:53.132 16:03:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:53.132 "name": "raid_bdev1", 00:33:53.132 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:53.132 "strip_size_kb": 0, 00:33:53.132 "state": "online", 00:33:53.132 "raid_level": "raid1", 00:33:53.132 "superblock": true, 00:33:53.132 "num_base_bdevs": 2, 00:33:53.132 "num_base_bdevs_discovered": 1, 00:33:53.132 "num_base_bdevs_operational": 1, 00:33:53.132 "base_bdevs_list": [ 00:33:53.132 { 00:33:53.132 "name": null, 00:33:53.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.132 "is_configured": false, 00:33:53.132 "data_offset": 0, 00:33:53.132 "data_size": 7936 00:33:53.132 }, 00:33:53.132 { 00:33:53.132 "name": "BaseBdev2", 00:33:53.132 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:53.132 "is_configured": true, 00:33:53.132 "data_offset": 256, 00:33:53.132 "data_size": 7936 00:33:53.132 } 00:33:53.132 ] 00:33:53.132 }' 00:33:53.132 16:03:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:53.132 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:53.391 "name": "raid_bdev1", 00:33:53.391 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:53.391 "strip_size_kb": 0, 00:33:53.391 "state": "online", 00:33:53.391 "raid_level": "raid1", 00:33:53.391 "superblock": true, 00:33:53.391 "num_base_bdevs": 2, 00:33:53.391 "num_base_bdevs_discovered": 1, 00:33:53.391 "num_base_bdevs_operational": 1, 00:33:53.391 "base_bdevs_list": [ 00:33:53.391 { 00:33:53.391 "name": 
null, 00:33:53.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.391 "is_configured": false, 00:33:53.391 "data_offset": 0, 00:33:53.391 "data_size": 7936 00:33:53.391 }, 00:33:53.391 { 00:33:53.391 "name": "BaseBdev2", 00:33:53.391 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:53.391 "is_configured": true, 00:33:53.391 "data_offset": 256, 00:33:53.391 "data_size": 7936 00:33:53.391 } 00:33:53.391 ] 00:33:53.391 }' 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:53.391 [2024-11-05 16:03:25.760892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:53.391 [2024-11-05 16:03:25.761005] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:53.391 [2024-11-05 16:03:25.761017] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:53.391 request: 00:33:53.391 { 00:33:53.391 "base_bdev": "BaseBdev1", 00:33:53.391 "raid_bdev": "raid_bdev1", 00:33:53.391 "method": "bdev_raid_add_base_bdev", 00:33:53.391 "req_id": 1 00:33:53.391 } 00:33:53.391 Got JSON-RPC error response 00:33:53.391 response: 00:33:53.391 { 00:33:53.391 "code": -22, 00:33:53.391 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:53.391 } 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:53.391 16:03:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:54.764 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:33:54.764 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:54.764 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:54.764 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:54.764 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:54.764 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:54.764 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.765 "name": "raid_bdev1", 00:33:54.765 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:54.765 "strip_size_kb": 0, 
00:33:54.765 "state": "online", 00:33:54.765 "raid_level": "raid1", 00:33:54.765 "superblock": true, 00:33:54.765 "num_base_bdevs": 2, 00:33:54.765 "num_base_bdevs_discovered": 1, 00:33:54.765 "num_base_bdevs_operational": 1, 00:33:54.765 "base_bdevs_list": [ 00:33:54.765 { 00:33:54.765 "name": null, 00:33:54.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.765 "is_configured": false, 00:33:54.765 "data_offset": 0, 00:33:54.765 "data_size": 7936 00:33:54.765 }, 00:33:54.765 { 00:33:54.765 "name": "BaseBdev2", 00:33:54.765 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:54.765 "is_configured": true, 00:33:54.765 "data_offset": 256, 00:33:54.765 "data_size": 7936 00:33:54.765 } 00:33:54.765 ] 00:33:54.765 }' 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.765 16:03:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.765 
16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:54.765 "name": "raid_bdev1", 00:33:54.765 "uuid": "0ccaa07a-1825-4102-9ea1-c010a3a846df", 00:33:54.765 "strip_size_kb": 0, 00:33:54.765 "state": "online", 00:33:54.765 "raid_level": "raid1", 00:33:54.765 "superblock": true, 00:33:54.765 "num_base_bdevs": 2, 00:33:54.765 "num_base_bdevs_discovered": 1, 00:33:54.765 "num_base_bdevs_operational": 1, 00:33:54.765 "base_bdevs_list": [ 00:33:54.765 { 00:33:54.765 "name": null, 00:33:54.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.765 "is_configured": false, 00:33:54.765 "data_offset": 0, 00:33:54.765 "data_size": 7936 00:33:54.765 }, 00:33:54.765 { 00:33:54.765 "name": "BaseBdev2", 00:33:54.765 "uuid": "edbbb2c4-8227-5220-8857-28b615ccf27e", 00:33:54.765 "is_configured": true, 00:33:54.765 "data_offset": 256, 00:33:54.765 "data_size": 7936 00:33:54.765 } 00:33:54.765 ] 00:33:54.765 }' 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:54.765 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 86128 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 86128 ']' 00:33:55.026 16:03:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 86128 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86128 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86128' 00:33:55.026 killing process with pid 86128 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 86128 00:33:55.026 Received shutdown signal, test time was about 60.000000 seconds 00:33:55.026 00:33:55.026 Latency(us) 00:33:55.026 [2024-11-05T16:03:27.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.026 [2024-11-05T16:03:27.441Z] =================================================================================================================== 00:33:55.026 [2024-11-05T16:03:27.441Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:55.026 [2024-11-05 16:03:27.222629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:55.026 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 86128 00:33:55.026 [2024-11-05 16:03:27.222718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:55.026 [2024-11-05 16:03:27.222755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:33:55.026 [2024-11-05 16:03:27.222763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:55.026 [2024-11-05 16:03:27.366197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:55.592 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:33:55.592 00:33:55.592 real 0m14.821s 00:33:55.592 user 0m18.843s 00:33:55.592 sys 0m1.074s 00:33:55.592 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:55.592 ************************************ 00:33:55.592 END TEST raid_rebuild_test_sb_md_interleaved 00:33:55.592 16:03:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:55.592 ************************************ 00:33:55.592 16:03:27 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:33:55.592 16:03:27 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:33:55.592 16:03:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 86128 ']' 00:33:55.592 16:03:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 86128 00:33:55.592 16:03:27 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:33:55.592 ************************************ 00:33:55.592 00:33:55.592 real 9m13.198s 00:33:55.592 user 12m22.728s 00:33:55.592 sys 1m14.729s 00:33:55.592 16:03:27 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:55.592 16:03:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:55.592 END TEST bdev_raid 00:33:55.592 ************************************ 00:33:55.592 16:03:28 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:55.592 16:03:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:55.592 16:03:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:55.592 16:03:28 -- common/autotest_common.sh@10 -- # set +x 00:33:55.850 
************************************ 00:33:55.850 START TEST spdkcli_raid 00:33:55.850 ************************************ 00:33:55.850 16:03:28 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:55.850 * Looking for test storage... 00:33:55.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:55.850 16:03:28 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:55.850 16:03:28 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:55.850 16:03:28 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:33:55.850 16:03:28 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:55.850 16:03:28 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.850 16:03:28 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.850 16:03:28 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.850 16:03:28 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.850 16:03:28 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.850 16:03:28 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.850 16:03:28 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.851 16:03:28 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.851 --rc genhtml_branch_coverage=1 00:33:55.851 --rc genhtml_function_coverage=1 00:33:55.851 --rc genhtml_legend=1 00:33:55.851 --rc geninfo_all_blocks=1 00:33:55.851 --rc geninfo_unexecuted_blocks=1 00:33:55.851 00:33:55.851 ' 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.851 --rc genhtml_branch_coverage=1 00:33:55.851 --rc genhtml_function_coverage=1 00:33:55.851 --rc genhtml_legend=1 00:33:55.851 --rc geninfo_all_blocks=1 00:33:55.851 --rc geninfo_unexecuted_blocks=1 00:33:55.851 00:33:55.851 ' 00:33:55.851 
16:03:28 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.851 --rc genhtml_branch_coverage=1 00:33:55.851 --rc genhtml_function_coverage=1 00:33:55.851 --rc genhtml_legend=1 00:33:55.851 --rc geninfo_all_blocks=1 00:33:55.851 --rc geninfo_unexecuted_blocks=1 00:33:55.851 00:33:55.851 ' 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.851 --rc genhtml_branch_coverage=1 00:33:55.851 --rc genhtml_function_coverage=1 00:33:55.851 --rc genhtml_legend=1 00:33:55.851 --rc geninfo_all_blocks=1 00:33:55.851 --rc geninfo_unexecuted_blocks=1 00:33:55.851 00:33:55.851 ' 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:33:55.851 16:03:28 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=86776 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 86776 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 86776 ']' 00:33:55.851 16:03:28 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:55.851 16:03:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:55.851 [2024-11-05 16:03:28.247352] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:33:55.851 [2024-11-05 16:03:28.247619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86776 ] 00:33:56.109 [2024-11-05 16:03:28.405481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:56.109 [2024-11-05 16:03:28.508559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.109 [2024-11-05 16:03:28.508720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.675 16:03:29 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:56.675 16:03:29 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:33:56.675 16:03:29 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:33:56.675 16:03:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:56.675 16:03:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:56.933 16:03:29 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:33:56.933 16:03:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:56.933 16:03:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:56.933 16:03:29 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:56.933 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:56.933 ' 00:33:58.308 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:33:58.308 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:33:58.308 16:03:30 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:33:58.308 16:03:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:58.308 16:03:30 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:33:58.308 16:03:30 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:33:58.308 16:03:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:58.308 16:03:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:58.308 16:03:30 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:33:58.308 ' 00:33:59.692 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:33:59.692 16:03:31 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:33:59.693 16:03:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.693 16:03:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:59.693 16:03:31 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:33:59.693 16:03:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:59.693 16:03:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:59.693 16:03:31 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:33:59.693 16:03:31 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:33:59.951 16:03:32 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:33:59.951 16:03:32 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:33:59.951 16:03:32 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:33:59.951 16:03:32 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.951 16:03:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:34:00.210 16:03:32 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:34:00.210 16:03:32 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:00.210 16:03:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:34:00.210 16:03:32 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:34:00.210 ' 00:34:01.158 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:34:01.158 16:03:33 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:34:01.158 16:03:33 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:01.158 16:03:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:34:01.158 16:03:33 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:34:01.158 16:03:33 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:01.158 16:03:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:34:01.158 16:03:33 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:34:01.158 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:34:01.158 ' 00:34:02.538 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:34:02.538 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:34:02.538 16:03:34 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:34:02.538 16:03:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:02.538 16:03:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:34:02.538 16:03:34 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 86776 00:34:02.538 16:03:34 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 86776 ']' 00:34:02.538 16:03:34 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 86776 00:34:02.538 16:03:34 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:34:02.539 16:03:34 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:02.539 16:03:34 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86776 00:34:02.539 16:03:34 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:02.539 16:03:34 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:02.539 16:03:34 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86776' 00:34:02.539 killing process with pid 86776 00:34:02.539 16:03:34 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 86776 00:34:02.539 16:03:34 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 86776 00:34:03.911 16:03:36 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:34:03.911 16:03:36 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 86776 ']' 00:34:03.911 16:03:36 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 86776 00:34:03.911 16:03:36 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 86776 ']' 00:34:03.911 16:03:36 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 86776 00:34:03.911 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (86776) - No such process 00:34:03.911 16:03:36 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 86776 is not found' 00:34:03.911 Process with pid 86776 is not found 00:34:03.911 16:03:36 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:34:03.911 16:03:36 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:03.911 16:03:36 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:03.911 16:03:36 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:03.911 00:34:03.911 real 0m8.035s 00:34:03.911 user 0m16.657s 00:34:03.911 sys 
0m0.731s 00:34:03.911 ************************************ 00:34:03.911 END TEST spdkcli_raid 00:34:03.911 ************************************ 00:34:03.911 16:03:36 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:03.911 16:03:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:34:03.911 16:03:36 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:34:03.911 16:03:36 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:03.911 16:03:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:03.911 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.911 ************************************ 00:34:03.911 START TEST blockdev_raid5f 00:34:03.911 ************************************ 00:34:03.911 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:34:03.911 * Looking for test storage... 00:34:03.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:03.911 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:03.911 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:03.911 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:34:03.911 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.911 16:03:36 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.912 16:03:36 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:03.912 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.912 --rc genhtml_branch_coverage=1 00:34:03.912 --rc genhtml_function_coverage=1 00:34:03.912 --rc genhtml_legend=1 00:34:03.912 --rc geninfo_all_blocks=1 00:34:03.912 --rc geninfo_unexecuted_blocks=1 00:34:03.912 00:34:03.912 ' 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.912 --rc genhtml_branch_coverage=1 00:34:03.912 --rc genhtml_function_coverage=1 00:34:03.912 --rc genhtml_legend=1 00:34:03.912 --rc geninfo_all_blocks=1 00:34:03.912 --rc geninfo_unexecuted_blocks=1 00:34:03.912 00:34:03.912 ' 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.912 --rc genhtml_branch_coverage=1 00:34:03.912 --rc genhtml_function_coverage=1 00:34:03.912 --rc genhtml_legend=1 00:34:03.912 --rc geninfo_all_blocks=1 00:34:03.912 --rc geninfo_unexecuted_blocks=1 00:34:03.912 00:34:03.912 ' 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.912 --rc genhtml_branch_coverage=1 00:34:03.912 --rc genhtml_function_coverage=1 00:34:03.912 --rc genhtml_legend=1 00:34:03.912 --rc geninfo_all_blocks=1 00:34:03.912 --rc geninfo_unexecuted_blocks=1 00:34:03.912 00:34:03.912 ' 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87034 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
87034 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 87034 ']' 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.912 16:03:36 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:03.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:03.912 16:03:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:03.912 [2024-11-05 16:03:36.317309] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:34:03.912 [2024-11-05 16:03:36.317585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87034 ] 00:34:04.170 [2024-11-05 16:03:36.473910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.170 [2024-11-05 16:03:36.572907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.736 16:03:37 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:04.736 16:03:37 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:34:04.736 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:34:04.736 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:34:04.736 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:34:04.736 16:03:37 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.736 16:03:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 Malloc0 00:34:04.995 Malloc1 00:34:04.995 Malloc2 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f0f89fa3-7aec-435b-818a-13ed8d8cbffd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f0f89fa3-7aec-435b-818a-13ed8d8cbffd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f0f89fa3-7aec-435b-818a-13ed8d8cbffd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f5e7548c-5e9e-4a71-8302-f6d8cb31171c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"733ad2c6-b7cb-41aa-bd13-bb337ec26127",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8dad4b3e-b2d0-4889-af0c-fbdee0e5b94d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:34:04.995 16:03:37 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 87034 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 87034 ']' 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 87034 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87034 00:34:04.995 killing process with pid 87034 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87034' 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 87034 00:34:04.995 16:03:37 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 87034 00:34:06.917 16:03:39 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:06.917 16:03:39 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:34:06.917 16:03:39 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:34:06.917 16:03:39 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:06.917 16:03:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:06.917 ************************************ 00:34:06.917 START TEST bdev_hello_world 00:34:06.917 ************************************ 00:34:06.917 16:03:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:34:06.917 [2024-11-05 16:03:39.127720] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:34:06.918 [2024-11-05 16:03:39.127836] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87090 ] 00:34:06.918 [2024-11-05 16:03:39.286812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.176 [2024-11-05 16:03:39.383528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.487 [2024-11-05 16:03:39.767349] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:07.487 [2024-11-05 16:03:39.767396] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:34:07.487 [2024-11-05 16:03:39.767411] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:07.488 [2024-11-05 16:03:39.767873] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:07.488 [2024-11-05 16:03:39.767993] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:07.488 [2024-11-05 16:03:39.768007] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:07.488 [2024-11-05 16:03:39.768056] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:34:07.488 00:34:07.488 [2024-11-05 16:03:39.768072] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:08.425 00:34:08.425 real 0m1.575s 00:34:08.425 user 0m1.277s 00:34:08.425 sys 0m0.176s 00:34:08.425 ************************************ 00:34:08.425 END TEST bdev_hello_world 00:34:08.425 ************************************ 00:34:08.425 16:03:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:08.425 16:03:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:08.425 16:03:40 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:34:08.425 16:03:40 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:08.425 16:03:40 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:08.425 16:03:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:08.425 ************************************ 00:34:08.425 START TEST bdev_bounds 00:34:08.426 ************************************ 00:34:08.426 Process bdevio pid: 87127 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87127 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87127' 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87127 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 87127 ']' 00:34:08.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:08.426 16:03:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:08.426 [2024-11-05 16:03:40.760101] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:34:08.426 [2024-11-05 16:03:40.760215] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87127 ] 00:34:08.684 [2024-11-05 16:03:40.918763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:08.684 [2024-11-05 16:03:41.017003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.684 [2024-11-05 16:03:41.017599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.684 [2024-11-05 16:03:41.017712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.251 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:09.251 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:34:09.251 16:03:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py 
perform_tests 00:34:09.509 I/O targets: 00:34:09.509 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:34:09.509 00:34:09.509 00:34:09.509 CUnit - A unit testing framework for C - Version 2.1-3 00:34:09.509 http://cunit.sourceforge.net/ 00:34:09.509 00:34:09.509 00:34:09.509 Suite: bdevio tests on: raid5f 00:34:09.509 Test: blockdev write read block ...passed 00:34:09.509 Test: blockdev write zeroes read block ...passed 00:34:09.509 Test: blockdev write zeroes read no split ...passed 00:34:09.509 Test: blockdev write zeroes read split ...passed 00:34:09.509 Test: blockdev write zeroes read split partial ...passed 00:34:09.509 Test: blockdev reset ...passed 00:34:09.509 Test: blockdev write read 8 blocks ...passed 00:34:09.509 Test: blockdev write read size > 128k ...passed 00:34:09.509 Test: blockdev write read invalid size ...passed 00:34:09.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:09.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:09.509 Test: blockdev write read max offset ...passed 00:34:09.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:09.510 Test: blockdev writev readv 8 blocks ...passed 00:34:09.510 Test: blockdev writev readv 30 x 1block ...passed 00:34:09.510 Test: blockdev writev readv block ...passed 00:34:09.510 Test: blockdev writev readv size > 128k ...passed 00:34:09.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:09.510 Test: blockdev comparev and writev ...passed 00:34:09.510 Test: blockdev nvme passthru rw ...passed 00:34:09.510 Test: blockdev nvme passthru vendor specific ...passed 00:34:09.510 Test: blockdev nvme admin passthru ...passed 00:34:09.510 Test: blockdev copy ...passed 00:34:09.510 00:34:09.510 Run Summary: Type Total Ran Passed Failed Inactive 00:34:09.510 suites 1 1 n/a 0 0 00:34:09.510 tests 23 23 23 0 0 00:34:09.510 asserts 130 130 130 0 n/a 00:34:09.510 00:34:09.510 Elapsed time = 0.440 seconds 
00:34:09.510 0 00:34:09.510 16:03:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 87127 00:34:09.510 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 87127 ']' 00:34:09.510 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 87127 00:34:09.510 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:34:09.510 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:09.510 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87127 00:34:09.768 killing process with pid 87127 00:34:09.768 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:09.768 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:09.768 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87127' 00:34:09.768 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 87127 00:34:09.768 16:03:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 87127 00:34:10.703 16:03:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:34:10.703 00:34:10.703 real 0m2.103s 00:34:10.703 user 0m5.269s 00:34:10.703 sys 0m0.272s 00:34:10.703 16:03:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:10.703 16:03:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:10.703 ************************************ 00:34:10.703 END TEST bdev_bounds 00:34:10.703 ************************************ 00:34:10.703 16:03:42 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:34:10.703 16:03:42 blockdev_raid5f -- common/autotest_common.sh@1103 
-- # '[' 5 -le 1 ']' 00:34:10.703 16:03:42 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:10.703 16:03:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:10.703 ************************************ 00:34:10.703 START TEST bdev_nbd 00:34:10.703 ************************************ 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:34:10.703 16:03:42 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87181 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87181 /var/tmp/spdk-nbd.sock 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 87181 ']' 00:34:10.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:10.703 16:03:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:10.703 [2024-11-05 16:03:42.923351] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:34:10.703 [2024-11-05 16:03:42.924058] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.703 [2024-11-05 16:03:43.080388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.962 [2024-11-05 16:03:43.158821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:34:11.529 16:03:43 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:34:11.787 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:11.787 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:11.787 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:11.787 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:11.788 1+0 records in 00:34:11.788 1+0 records out 00:34:11.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342173 s, 12.0 MB/s 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:34:11.788 16:03:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:11.788 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:11.788 { 00:34:11.788 "nbd_device": "/dev/nbd0", 00:34:11.788 "bdev_name": "raid5f" 00:34:11.788 } 00:34:11.788 ]' 00:34:11.788 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:11.788 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:11.788 { 00:34:11.788 "nbd_device": "/dev/nbd0", 00:34:11.788 "bdev_name": "raid5f" 00:34:11.788 } 00:34:11.788 ]' 00:34:11.788 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.046 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:12.304 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:12.305 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:34:12.564 /dev/nbd0 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:12.564 16:03:44 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:12.564 1+0 records in 00:34:12.564 1+0 records out 00:34:12.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027355 s, 15.0 MB/s 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.564 16:03:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:12.823 { 00:34:12.823 "nbd_device": "/dev/nbd0", 00:34:12.823 "bdev_name": "raid5f" 00:34:12.823 } 00:34:12.823 ]' 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:12.823 { 00:34:12.823 "nbd_device": "/dev/nbd0", 00:34:12.823 "bdev_name": "raid5f" 00:34:12.823 } 00:34:12.823 ]' 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:12.823 256+0 records in 00:34:12.823 256+0 records out 00:34:12.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558965 s, 188 MB/s 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:12.823 256+0 records in 00:34:12.823 256+0 records out 00:34:12.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244791 s, 42.8 MB/s 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.823 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:13.081 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:34:13.340 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:13.598 malloc_lvol_verify 00:34:13.598 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:13.598 8341a6ad-be81-4c14-b4ce-a6af975f7225 00:34:13.598 16:03:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:13.857 005c7273-d10c-49b8-9ec3-0eb509a69fc8 00:34:13.857 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:14.115 /dev/nbd0 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:34:14.115 mke2fs 1.47.0 (5-Feb-2023) 00:34:14.115 Discarding device blocks: 0/4096 done 00:34:14.115 Creating filesystem with 4096 1k blocks and 1024 inodes 00:34:14.115 00:34:14.115 Allocating group tables: 0/1 done 00:34:14.115 Writing inode tables: 0/1 done 00:34:14.115 Creating journal (1024 blocks): done 00:34:14.115 Writing superblocks and filesystem accounting information: 0/1 done 00:34:14.115 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:14.115 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87181 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 87181 ']' 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 87181 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87181 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:14.390 killing process with pid 87181 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87181' 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 87181 00:34:14.390 16:03:46 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 87181 00:34:14.956 16:03:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:34:14.956 00:34:14.956 real 0m4.453s 00:34:14.956 user 0m6.482s 00:34:14.956 sys 0m0.879s 00:34:14.956 16:03:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:14.956 ************************************ 00:34:14.956 END TEST bdev_nbd 00:34:14.956 ************************************ 00:34:14.956 16:03:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 16:03:47 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:34:14.956 16:03:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:34:14.956 16:03:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:34:14.956 16:03:47 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:34:14.956 16:03:47 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:14.956 16:03:47 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:14.956 16:03:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 ************************************ 00:34:14.956 START TEST bdev_fio 00:34:14.956 ************************************ 00:34:14.956 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:34:14.956 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:34:14.956 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:34:14.956 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:34:14.956 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:34:14.956 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:34:14.957 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:15.217 ************************************ 00:34:15.217 START TEST bdev_fio_rw_verify 00:34:15.217 ************************************ 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:15.217 16:03:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:15.478 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:15.478 fio-3.35 00:34:15.478 Starting 1 thread 00:34:27.684 00:34:27.684 job_raid5f: (groupid=0, jobs=1): err= 0: pid=87364: Tue Nov 5 16:03:58 2024 00:34:27.684 read: IOPS=12.7k, BW=49.8MiB/s (52.2MB/s)(498MiB/10000msec) 00:34:27.684 slat (nsec): min=17127, max=86116, avg=18868.65, stdev=2329.21 00:34:27.684 clat (usec): min=8, max=392, avg=127.57, stdev=46.53 00:34:27.684 lat (usec): min=27, max=441, avg=146.43, stdev=47.21 00:34:27.684 clat percentiles (usec): 00:34:27.684 | 50.000th=[ 130], 99.000th=[ 241], 99.900th=[ 253], 99.990th=[ 289], 00:34:27.684 | 99.999th=[ 359] 00:34:27.684 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(514MiB/9878msec); 0 zone resets 00:34:27.684 slat (usec): min=7, max=156, avg=15.93, stdev= 2.54 00:34:27.684 clat (usec): min=52, max=1304, avg=287.49, stdev=46.14 00:34:27.684 lat (usec): min=67, max=1460, avg=303.42, stdev=47.53 00:34:27.684 clat percentiles (usec): 00:34:27.684 | 50.000th=[ 289], 99.000th=[ 408], 99.900th=[ 424], 99.990th=[ 1004], 00:34:27.684 | 99.999th=[ 1237] 00:34:27.684 bw ( KiB/s): min=42232, max=58616, per=98.95%, avg=52677.95, stdev=5379.27, samples=19 00:34:27.684 iops : min=10558, max=14654, avg=13169.47, stdev=1344.81, samples=19 00:34:27.684 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=17.12%, 250=42.62% 00:34:27.684 lat (usec) : 500=40.24%, 750=0.01%, 1000=0.01% 00:34:27.684 lat (msec) : 2=0.01% 00:34:27.684 cpu : usr=99.15%, sys=0.33%, ctx=19, majf=0, minf=10364 00:34:27.684 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.684 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.684 issued rwts: total=127436,131461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.684 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.684 00:34:27.684 Run status group 0 (all jobs): 00:34:27.684 READ: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=498MiB (522MB), run=10000-10000msec 00:34:27.684 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=514MiB (538MB), run=9878-9878msec 00:34:27.684 ----------------------------------------------------- 00:34:27.684 Suppressions used: 00:34:27.684 count bytes template 00:34:27.684 1 7 /usr/src/fio/parse.c 00:34:27.684 82 7872 /usr/src/fio/iolog.c 00:34:27.684 1 8 libtcmalloc_minimal.so 00:34:27.684 1 904 libcrypto.so 00:34:27.684 ----------------------------------------------------- 00:34:27.684 00:34:27.684 00:34:27.684 real 0m11.782s 00:34:27.684 user 0m12.367s 00:34:27.684 sys 0m0.575s 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:27.684 ************************************ 00:34:27.684 END TEST bdev_fio_rw_verify 00:34:27.684 ************************************ 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.684 16:03:59 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:34:27.684 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f0f89fa3-7aec-435b-818a-13ed8d8cbffd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' 
"uuid": "f0f89fa3-7aec-435b-818a-13ed8d8cbffd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f0f89fa3-7aec-435b-818a-13ed8d8cbffd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f5e7548c-5e9e-4a71-8302-f6d8cb31171c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "733ad2c6-b7cb-41aa-bd13-bb337ec26127",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8dad4b3e-b2d0-4889-af0c-fbdee0e5b94d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:34:27.685 /home/vagrant/spdk_repo/spdk 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 
-- # return 0 00:34:27.685 00:34:27.685 real 0m11.946s 00:34:27.685 user 0m12.435s 00:34:27.685 sys 0m0.649s 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:27.685 16:03:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:27.685 ************************************ 00:34:27.685 END TEST bdev_fio 00:34:27.685 ************************************ 00:34:27.685 16:03:59 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:27.685 16:03:59 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:27.685 16:03:59 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:34:27.685 16:03:59 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:27.685 16:03:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:27.685 ************************************ 00:34:27.685 START TEST bdev_verify 00:34:27.685 ************************************ 00:34:27.685 16:03:59 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:27.685 [2024-11-05 16:03:59.411566] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 
00:34:27.685 [2024-11-05 16:03:59.411678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87522 ] 00:34:27.685 [2024-11-05 16:03:59.569049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:27.685 [2024-11-05 16:03:59.649075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.685 [2024-11-05 16:03:59.649157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.685 Running I/O for 5 seconds... 00:34:29.995 18189.00 IOPS, 71.05 MiB/s [2024-11-05T16:04:03.344Z] 20338.00 IOPS, 79.45 MiB/s [2024-11-05T16:04:04.285Z] 21619.33 IOPS, 84.45 MiB/s [2024-11-05T16:04:05.222Z] 21751.50 IOPS, 84.97 MiB/s [2024-11-05T16:04:05.222Z] 21219.00 IOPS, 82.89 MiB/s 00:34:32.807 Latency(us) 00:34:32.807 [2024-11-05T16:04:05.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.807 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:32.807 Verification LBA range: start 0x0 length 0x2000 00:34:32.807 raid5f : 5.01 10881.83 42.51 0.00 0.00 17485.26 200.07 17241.01 00:34:32.807 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:32.807 Verification LBA range: start 0x2000 length 0x2000 00:34:32.807 raid5f : 5.01 10339.22 40.39 0.00 0.00 18601.92 153.60 17745.13 00:34:32.807 [2024-11-05T16:04:05.222Z] =================================================================================================================== 00:34:32.807 [2024-11-05T16:04:05.222Z] Total : 21221.05 82.89 0.00 0.00 18029.34 153.60 17745.13 00:34:33.375 00:34:33.375 real 0m6.323s 00:34:33.375 user 0m11.832s 00:34:33.375 sys 0m0.193s 00:34:33.375 16:04:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:33.375 
************************************ 00:34:33.375 END TEST bdev_verify 00:34:33.375 ************************************ 00:34:33.375 16:04:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:33.375 16:04:05 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:33.375 16:04:05 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:34:33.375 16:04:05 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:33.375 16:04:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:33.375 ************************************ 00:34:33.375 START TEST bdev_verify_big_io 00:34:33.375 ************************************ 00:34:33.375 16:04:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:33.375 [2024-11-05 16:04:05.785918] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:34:33.375 [2024-11-05 16:04:05.786039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87609 ] 00:34:33.634 [2024-11-05 16:04:05.941667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:33.634 [2024-11-05 16:04:06.029704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.634 [2024-11-05 16:04:06.029789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.208 Running I/O for 5 seconds... 
00:34:36.088 1012.00 IOPS, 63.25 MiB/s [2024-11-05T16:04:09.876Z] 1108.50 IOPS, 69.28 MiB/s [2024-11-05T16:04:10.809Z] 1100.00 IOPS, 68.75 MiB/s [2024-11-05T16:04:11.742Z] 1142.00 IOPS, 71.38 MiB/s 00:34:39.327 Latency(us) 00:34:39.327 [2024-11-05T16:04:11.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.327 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:39.327 Verification LBA range: start 0x0 length 0x200 00:34:39.327 raid5f : 5.07 575.81 35.99 0.00 0.00 5448843.52 125.24 245205.46 00:34:39.327 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:39.327 Verification LBA range: start 0x200 length 0x200 00:34:39.327 raid5f : 5.06 576.84 36.05 0.00 0.00 5402250.44 159.11 245205.46 00:34:39.327 [2024-11-05T16:04:11.742Z] =================================================================================================================== 00:34:39.327 [2024-11-05T16:04:11.742Z] Total : 1152.65 72.04 0.00 0.00 5425546.98 125.24 245205.46 00:34:39.893 00:34:39.893 real 0m6.408s 00:34:39.893 user 0m12.004s 00:34:39.893 sys 0m0.191s 00:34:39.893 16:04:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:39.893 ************************************ 00:34:39.893 END TEST bdev_verify_big_io 00:34:39.893 ************************************ 00:34:39.893 16:04:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:39.893 16:04:12 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:39.893 16:04:12 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:34:39.893 16:04:12 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:39.893 16:04:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 
00:34:39.893 ************************************ 00:34:39.893 START TEST bdev_write_zeroes 00:34:39.893 ************************************ 00:34:39.893 16:04:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:39.893 [2024-11-05 16:04:12.242902] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:34:39.893 [2024-11-05 16:04:12.243042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87698 ] 00:34:40.151 [2024-11-05 16:04:12.398176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.151 [2024-11-05 16:04:12.486319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.408 Running I/O for 1 seconds... 
00:34:41.782 26319.00 IOPS, 102.81 MiB/s 00:34:41.782 Latency(us) 00:34:41.782 [2024-11-05T16:04:14.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.782 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:41.782 raid5f : 1.01 26276.97 102.64 0.00 0.00 4854.80 1209.90 17241.01 00:34:41.782 [2024-11-05T16:04:14.197Z] =================================================================================================================== 00:34:41.782 [2024-11-05T16:04:14.197Z] Total : 26276.97 102.64 0.00 0.00 4854.80 1209.90 17241.01 00:34:42.348 00:34:42.348 real 0m2.519s 00:34:42.348 user 0m2.232s 00:34:42.348 sys 0m0.163s 00:34:42.348 16:04:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:42.348 ************************************ 00:34:42.348 END TEST bdev_write_zeroes 00:34:42.348 ************************************ 00:34:42.348 16:04:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:42.348 16:04:14 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:42.348 16:04:14 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:34:42.348 16:04:14 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:42.348 16:04:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:42.348 ************************************ 00:34:42.348 START TEST bdev_json_nonenclosed 00:34:42.348 ************************************ 00:34:42.348 16:04:14 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:42.606 [2024-11-05 
16:04:14.819050] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:34:42.607 [2024-11-05 16:04:14.819165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87744 ] 00:34:42.607 [2024-11-05 16:04:14.976031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.864 [2024-11-05 16:04:15.073112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.864 [2024-11-05 16:04:15.073190] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:34:42.864 [2024-11-05 16:04:15.073210] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:42.864 [2024-11-05 16:04:15.073219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:42.864 00:34:42.864 real 0m0.489s 00:34:42.864 user 0m0.303s 00:34:42.864 sys 0m0.082s 00:34:42.864 ************************************ 00:34:42.864 END TEST bdev_json_nonenclosed 00:34:42.864 ************************************ 00:34:42.864 16:04:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:42.864 16:04:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:43.122 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:43.122 16:04:15 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:34:43.122 16:04:15 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:43.122 16:04:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:43.122 
************************************ 00:34:43.122 START TEST bdev_json_nonarray 00:34:43.122 ************************************ 00:34:43.122 16:04:15 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:43.122 [2024-11-05 16:04:15.370093] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.03.0 initialization... 00:34:43.122 [2024-11-05 16:04:15.370201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87774 ] 00:34:43.122 [2024-11-05 16:04:15.530111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.380 [2024-11-05 16:04:15.626772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.380 [2024-11-05 16:04:15.626870] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:43.380 [2024-11-05 16:04:15.626888] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:43.380 [2024-11-05 16:04:15.626902] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:43.638 00:34:43.638 real 0m0.496s 00:34:43.638 user 0m0.300s 00:34:43.638 sys 0m0.093s 00:34:43.638 16:04:15 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:43.638 16:04:15 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:43.638 ************************************ 00:34:43.638 END TEST bdev_json_nonarray 00:34:43.638 ************************************ 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:34:43.638 16:04:15 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:34:43.638 00:34:43.638 real 0m39.771s 00:34:43.638 user 0m55.221s 00:34:43.638 sys 0m3.356s 00:34:43.638 16:04:15 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:43.638 16:04:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:43.638 
************************************ 00:34:43.638 END TEST blockdev_raid5f 00:34:43.638 ************************************ 00:34:43.638 16:04:15 -- spdk/autotest.sh@194 -- # uname -s 00:34:43.638 16:04:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:34:43.638 16:04:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:43.638 16:04:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:43.638 16:04:15 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:34:43.638 16:04:15 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:34:43.638 16:04:15 -- spdk/autotest.sh@256 -- # timing_exit lib 00:34:43.638 16:04:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:43.638 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:34:43.638 16:04:15 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:34:43.638 16:04:15 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:34:43.638 16:04:15 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:34:43.638 16:04:15 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:34:43.638 16:04:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:43.639 16:04:15 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:43.639 16:04:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:43.639 16:04:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:43.639 16:04:15 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:43.639 16:04:15 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:34:43.639 16:04:15 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:43.639 16:04:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:43.639 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:34:43.639 16:04:15 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:43.639 16:04:15 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:34:43.639 16:04:15 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:34:43.639 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:34:45.021 INFO: APP EXITING 00:34:45.021 INFO: killing all VMs 00:34:45.021 INFO: killing vhost app 00:34:45.021 INFO: EXIT DONE 00:34:45.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:45.345 Waiting for block devices as requested 00:34:45.345 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:45.345 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:45.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:45.912 Cleaning 00:34:45.912 Removing: /var/run/dpdk/spdk0/config 00:34:45.912 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:45.912 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:45.912 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:45.912 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:45.912 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:45.912 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:45.912 Removing: /dev/shm/spdk_tgt_trace.pid56058 00:34:45.912 Removing: /var/run/dpdk/spdk0 00:34:45.912 Removing: /var/run/dpdk/spdk_pid55856 00:34:45.912 Removing: /var/run/dpdk/spdk_pid56058 00:34:46.170 Removing: /var/run/dpdk/spdk_pid56276 00:34:46.170 Removing: /var/run/dpdk/spdk_pid56369 00:34:46.170 Removing: /var/run/dpdk/spdk_pid56403 00:34:46.170 Removing: /var/run/dpdk/spdk_pid56526 00:34:46.170 Removing: /var/run/dpdk/spdk_pid56544 
00:34:46.170 Removing: /var/run/dpdk/spdk_pid56732 00:34:46.170 Removing: /var/run/dpdk/spdk_pid56830 00:34:46.170 Removing: /var/run/dpdk/spdk_pid56921 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57032 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57129 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57163 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57199 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57275 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57354 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57790 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57848 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57906 00:34:46.170 Removing: /var/run/dpdk/spdk_pid57922 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58024 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58040 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58142 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58158 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58211 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58229 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58282 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58295 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58449 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58486 00:34:46.170 Removing: /var/run/dpdk/spdk_pid58569 00:34:46.170 Removing: /var/run/dpdk/spdk_pid59794 00:34:46.170 Removing: /var/run/dpdk/spdk_pid59995 00:34:46.170 Removing: /var/run/dpdk/spdk_pid60124 00:34:46.170 Removing: /var/run/dpdk/spdk_pid60723 00:34:46.170 Removing: /var/run/dpdk/spdk_pid60920 00:34:46.170 Removing: /var/run/dpdk/spdk_pid61057 00:34:46.170 Removing: /var/run/dpdk/spdk_pid61657 00:34:46.170 Removing: /var/run/dpdk/spdk_pid61969 00:34:46.170 Removing: /var/run/dpdk/spdk_pid62099 00:34:46.170 Removing: /var/run/dpdk/spdk_pid63407 00:34:46.170 Removing: /var/run/dpdk/spdk_pid63649 00:34:46.170 Removing: /var/run/dpdk/spdk_pid63784 00:34:46.170 Removing: /var/run/dpdk/spdk_pid65100 00:34:46.170 Removing: /var/run/dpdk/spdk_pid65336 00:34:46.170 Removing: /var/run/dpdk/spdk_pid65471 
00:34:46.170 Removing: /var/run/dpdk/spdk_pid66784 00:34:46.170 Removing: /var/run/dpdk/spdk_pid67202 00:34:46.170 Removing: /var/run/dpdk/spdk_pid67337 00:34:46.170 Removing: /var/run/dpdk/spdk_pid68739 00:34:46.170 Removing: /var/run/dpdk/spdk_pid68986 00:34:46.170 Removing: /var/run/dpdk/spdk_pid69122 00:34:46.170 Removing: /var/run/dpdk/spdk_pid70520 00:34:46.170 Removing: /var/run/dpdk/spdk_pid70762 00:34:46.170 Removing: /var/run/dpdk/spdk_pid70897 00:34:46.170 Removing: /var/run/dpdk/spdk_pid72307 00:34:46.170 Removing: /var/run/dpdk/spdk_pid72767 00:34:46.170 Removing: /var/run/dpdk/spdk_pid72902 00:34:46.170 Removing: /var/run/dpdk/spdk_pid73034 00:34:46.170 Removing: /var/run/dpdk/spdk_pid73435 00:34:46.170 Removing: /var/run/dpdk/spdk_pid74141 00:34:46.170 Removing: /var/run/dpdk/spdk_pid74497 00:34:46.170 Removing: /var/run/dpdk/spdk_pid75178 00:34:46.170 Removing: /var/run/dpdk/spdk_pid75607 00:34:46.170 Removing: /var/run/dpdk/spdk_pid76331 00:34:46.170 Removing: /var/run/dpdk/spdk_pid76719 00:34:46.170 Removing: /var/run/dpdk/spdk_pid78588 00:34:46.170 Removing: /var/run/dpdk/spdk_pid79004 00:34:46.170 Removing: /var/run/dpdk/spdk_pid79426 00:34:46.170 Removing: /var/run/dpdk/spdk_pid81416 00:34:46.170 Removing: /var/run/dpdk/spdk_pid81874 00:34:46.170 Removing: /var/run/dpdk/spdk_pid82373 00:34:46.170 Removing: /var/run/dpdk/spdk_pid83408 00:34:46.170 Removing: /var/run/dpdk/spdk_pid83718 00:34:46.170 Removing: /var/run/dpdk/spdk_pid84612 00:34:46.170 Removing: /var/run/dpdk/spdk_pid84918 00:34:46.170 Removing: /var/run/dpdk/spdk_pid85816 00:34:46.170 Removing: /var/run/dpdk/spdk_pid86128 00:34:46.170 Removing: /var/run/dpdk/spdk_pid86776 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87034 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87090 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87127 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87349 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87522 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87609 
00:34:46.170 Removing: /var/run/dpdk/spdk_pid87698 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87744 00:34:46.170 Removing: /var/run/dpdk/spdk_pid87774 00:34:46.170 Clean 00:34:46.429 16:04:18 -- common/autotest_common.sh@1451 -- # return 0 00:34:46.429 16:04:18 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:46.429 16:04:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.429 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:34:46.429 16:04:18 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:34:46.429 16:04:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.429 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:34:46.429 16:04:18 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:46.429 16:04:18 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:46.429 16:04:18 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:46.429 16:04:18 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:46.429 16:04:18 -- spdk/autotest.sh@394 -- # hostname 00:34:46.429 16:04:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:46.687 geninfo: WARNING: invalid characters removed from testname! 
00:35:08.641 16:04:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:11.926 16:04:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:13.825 16:04:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:16.370 16:04:48 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:18.941 16:04:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:20.845 16:04:52 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:22.227 16:04:54 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:22.227 16:04:54 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:22.227 16:04:54 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:22.227 16:04:54 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:22.227 16:04:54 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:22.227 16:04:54 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:22.489 + [[ -n 5001 ]] 00:35:22.489 + sudo kill 5001 00:35:22.500 [Pipeline] } 00:35:22.517 [Pipeline] // timeout 00:35:22.523 [Pipeline] } 00:35:22.537 [Pipeline] // stage 00:35:22.543 [Pipeline] } 00:35:22.558 [Pipeline] // catchError 00:35:22.568 [Pipeline] stage 00:35:22.570 [Pipeline] { (Stop VM) 00:35:22.583 [Pipeline] sh 00:35:22.869 + vagrant halt 00:35:25.424 ==> default: Halting domain... 00:35:29.666 [Pipeline] sh 00:35:29.948 + vagrant destroy -f 00:35:32.496 ==> default: Removing domain... 
00:35:32.818 [Pipeline] sh 00:35:33.103 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:35:33.114 [Pipeline] } 00:35:33.128 [Pipeline] // stage 00:35:33.134 [Pipeline] } 00:35:33.148 [Pipeline] // dir 00:35:33.154 [Pipeline] } 00:35:33.169 [Pipeline] // wrap 00:35:33.175 [Pipeline] } 00:35:33.186 [Pipeline] // catchError 00:35:33.195 [Pipeline] stage 00:35:33.197 [Pipeline] { (Epilogue) 00:35:33.210 [Pipeline] sh 00:35:33.495 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:38.785 [Pipeline] catchError 00:35:38.787 [Pipeline] { 00:35:38.806 [Pipeline] sh 00:35:39.091 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:39.091 Artifacts sizes are good 00:35:39.101 [Pipeline] } 00:35:39.113 [Pipeline] // catchError 00:35:39.122 [Pipeline] archiveArtifacts 00:35:39.130 Archiving artifacts 00:35:39.257 [Pipeline] cleanWs 00:35:39.268 [WS-CLEANUP] Deleting project workspace... 00:35:39.268 [WS-CLEANUP] Deferred wipeout is used... 00:35:39.274 [WS-CLEANUP] done 00:35:39.275 [Pipeline] } 00:35:39.289 [Pipeline] // stage 00:35:39.292 [Pipeline] } 00:35:39.304 [Pipeline] // node 00:35:39.310 [Pipeline] End of Pipeline 00:35:39.341 Finished: SUCCESS